Mar 14 08:57:26 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 14 08:57:26 crc restorecon[4700]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 14 08:57:26 crc restorecon[4700]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 
08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 
08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 14 08:57:26 crc restorecon[4700]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 
crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 14 08:57:26 crc restorecon[4700]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc 
restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 14 08:57:26 crc restorecon[4700]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 14 08:57:27 crc kubenswrapper[4869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 08:57:27 crc kubenswrapper[4869]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 14 08:57:27 crc kubenswrapper[4869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 08:57:27 crc kubenswrapper[4869]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 08:57:27 crc kubenswrapper[4869]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 14 08:57:27 crc kubenswrapper[4869]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.449958 4869 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455364 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455396 4869 feature_gate.go:330] unrecognized feature gate: Example Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455406 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455418 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455430 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455442 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455454 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455467 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455481 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455495 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455541 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455551 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455559 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455576 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455584 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455593 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455600 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455609 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455617 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455625 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455634 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455642 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 08:57:27 crc 
kubenswrapper[4869]: W0314 08:57:27.455650 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455658 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455666 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455674 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455683 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455691 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455699 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455707 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455716 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455724 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455732 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455739 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455748 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455757 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455764 4869 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455774 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455784 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455793 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455801 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455810 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455819 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455830 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455838 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455846 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455854 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455862 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455869 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455877 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455884 4869 feature_gate.go:330] unrecognized 
feature gate: MixedCPUsAllocation Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455892 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455900 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455907 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455915 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455923 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455930 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455938 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455946 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455953 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455961 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455969 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455977 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455986 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.455994 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 14 08:57:27 crc 
kubenswrapper[4869]: W0314 08:57:27.456001 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.456009 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.456017 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.456025 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.456032 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.456040 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456174 4869 flags.go:64] FLAG: --address="0.0.0.0" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456196 4869 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456210 4869 flags.go:64] FLAG: --anonymous-auth="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456221 4869 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456233 4869 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456241 4869 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456253 4869 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456264 4869 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456273 4869 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456282 4869 
flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456292 4869 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456302 4869 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456311 4869 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456320 4869 flags.go:64] FLAG: --cgroup-root="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456329 4869 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456338 4869 flags.go:64] FLAG: --client-ca-file="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456347 4869 flags.go:64] FLAG: --cloud-config="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456355 4869 flags.go:64] FLAG: --cloud-provider="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456363 4869 flags.go:64] FLAG: --cluster-dns="[]" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456381 4869 flags.go:64] FLAG: --cluster-domain="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456391 4869 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456401 4869 flags.go:64] FLAG: --config-dir="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456413 4869 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456422 4869 flags.go:64] FLAG: --container-log-max-files="5" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456434 4869 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456443 4869 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 
08:57:27.456452 4869 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456464 4869 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456475 4869 flags.go:64] FLAG: --contention-profiling="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456487 4869 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456498 4869 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456540 4869 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456550 4869 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456562 4869 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456571 4869 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456579 4869 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456588 4869 flags.go:64] FLAG: --enable-load-reader="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456597 4869 flags.go:64] FLAG: --enable-server="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456606 4869 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456617 4869 flags.go:64] FLAG: --event-burst="100" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456626 4869 flags.go:64] FLAG: --event-qps="50" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456635 4869 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456645 4869 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 14 08:57:27 crc 
kubenswrapper[4869]: I0314 08:57:27.456653 4869 flags.go:64] FLAG: --eviction-hard="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456664 4869 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456672 4869 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456681 4869 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456690 4869 flags.go:64] FLAG: --eviction-soft="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456699 4869 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456708 4869 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456717 4869 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456725 4869 flags.go:64] FLAG: --experimental-mounter-path="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456734 4869 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456743 4869 flags.go:64] FLAG: --fail-swap-on="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456752 4869 flags.go:64] FLAG: --feature-gates="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456762 4869 flags.go:64] FLAG: --file-check-frequency="20s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456772 4869 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456781 4869 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456792 4869 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456801 4869 flags.go:64] FLAG: --healthz-port="10248" Mar 14 08:57:27 
crc kubenswrapper[4869]: I0314 08:57:27.456810 4869 flags.go:64] FLAG: --help="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456819 4869 flags.go:64] FLAG: --hostname-override="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456828 4869 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456838 4869 flags.go:64] FLAG: --http-check-frequency="20s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456847 4869 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456855 4869 flags.go:64] FLAG: --image-credential-provider-config="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456864 4869 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456873 4869 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456882 4869 flags.go:64] FLAG: --image-service-endpoint="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456891 4869 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456900 4869 flags.go:64] FLAG: --kube-api-burst="100" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456909 4869 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456946 4869 flags.go:64] FLAG: --kube-api-qps="50" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456958 4869 flags.go:64] FLAG: --kube-reserved="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456968 4869 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456977 4869 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.456986 4869 flags.go:64] FLAG: --kubelet-cgroups="" Mar 14 08:57:27 crc 
kubenswrapper[4869]: I0314 08:57:27.456995 4869 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457004 4869 flags.go:64] FLAG: --lock-file="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457013 4869 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457022 4869 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457031 4869 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457044 4869 flags.go:64] FLAG: --log-json-split-stream="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457053 4869 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457062 4869 flags.go:64] FLAG: --log-text-split-stream="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457071 4869 flags.go:64] FLAG: --logging-format="text" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457080 4869 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457089 4869 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457098 4869 flags.go:64] FLAG: --manifest-url="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457107 4869 flags.go:64] FLAG: --manifest-url-header="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457127 4869 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457136 4869 flags.go:64] FLAG: --max-open-files="1000000" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457147 4869 flags.go:64] FLAG: --max-pods="110" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457156 4869 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 14 08:57:27 crc 
kubenswrapper[4869]: I0314 08:57:27.457166 4869 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457176 4869 flags.go:64] FLAG: --memory-manager-policy="None" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457184 4869 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457193 4869 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457202 4869 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457211 4869 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457230 4869 flags.go:64] FLAG: --node-status-max-images="50" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457238 4869 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457247 4869 flags.go:64] FLAG: --oom-score-adj="-999" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457256 4869 flags.go:64] FLAG: --pod-cidr="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457265 4869 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457278 4869 flags.go:64] FLAG: --pod-manifest-path="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457287 4869 flags.go:64] FLAG: --pod-max-pids="-1" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457296 4869 flags.go:64] FLAG: --pods-per-core="0" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457305 4869 flags.go:64] FLAG: --port="10250" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457314 4869 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457323 4869 flags.go:64] FLAG: --provider-id="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457332 4869 flags.go:64] FLAG: --qos-reserved="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457342 4869 flags.go:64] FLAG: --read-only-port="10255" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457351 4869 flags.go:64] FLAG: --register-node="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457359 4869 flags.go:64] FLAG: --register-schedulable="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457368 4869 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457382 4869 flags.go:64] FLAG: --registry-burst="10" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457391 4869 flags.go:64] FLAG: --registry-qps="5" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457400 4869 flags.go:64] FLAG: --reserved-cpus="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457409 4869 flags.go:64] FLAG: --reserved-memory="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457420 4869 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457429 4869 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457438 4869 flags.go:64] FLAG: --rotate-certificates="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457453 4869 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457462 4869 flags.go:64] FLAG: --runonce="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457471 4869 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457480 4869 flags.go:64] FLAG: 
--runtime-request-timeout="2m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457489 4869 flags.go:64] FLAG: --seccomp-default="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457498 4869 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457538 4869 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457557 4869 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457568 4869 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457577 4869 flags.go:64] FLAG: --storage-driver-password="root" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457586 4869 flags.go:64] FLAG: --storage-driver-secure="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457595 4869 flags.go:64] FLAG: --storage-driver-table="stats" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457604 4869 flags.go:64] FLAG: --storage-driver-user="root" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457613 4869 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457622 4869 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457631 4869 flags.go:64] FLAG: --system-cgroups="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457640 4869 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457654 4869 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457663 4869 flags.go:64] FLAG: --tls-cert-file="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457672 4869 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 
08:57:27.457684 4869 flags.go:64] FLAG: --tls-min-version="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457692 4869 flags.go:64] FLAG: --tls-private-key-file="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457701 4869 flags.go:64] FLAG: --topology-manager-policy="none" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457710 4869 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457719 4869 flags.go:64] FLAG: --topology-manager-scope="container" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457728 4869 flags.go:64] FLAG: --v="2" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457739 4869 flags.go:64] FLAG: --version="false" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457751 4869 flags.go:64] FLAG: --vmodule="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457761 4869 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.457771 4869 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.457974 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.457985 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.457998 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458006 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458016 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458024 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 14 08:57:27 crc kubenswrapper[4869]: 
W0314 08:57:27.458032 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458040 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458048 4869 feature_gate.go:330] unrecognized feature gate: Example Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458064 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458072 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458080 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458091 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458101 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458110 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458119 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458129 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458139 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458148 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458156 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458164 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458172 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458180 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458189 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458197 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458206 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458214 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458222 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458230 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458237 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458245 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458253 4869 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458261 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458269 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458279 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458287 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458297 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458307 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458315 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458323 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458332 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458343 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458351 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458359 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458367 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458375 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements 
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458383 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458391 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458399 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458408 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458416 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458424 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458432 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458440 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458447 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458455 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458463 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458470 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458478 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458486 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458494 4869 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458501 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458535 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458546 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458556 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458565 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458580 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458591 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458601 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458609 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.458617 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.458629 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.467839 4869 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.467865 4869 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467939 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467946 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467950 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467954 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467960 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467964 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467967 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467971 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467975 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467978 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467982 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467987 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467992 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.467996 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468000 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468004 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468008 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468012 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468015 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468019 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468022 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468026 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468030 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468033 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468037 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468040 4869 feature_gate.go:330] unrecognized feature gate: Example
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468044 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468048 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468052 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468058 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468064 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468069 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468074 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468079 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468083 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468088 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468092 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468096 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468100 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468105 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468112 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468117 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468122 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468126 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468131 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468136 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468141 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468146 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468150 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468154 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468159 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468163 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468167 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468172 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468176 4869 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468180 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468183 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468187 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468191 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468194 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468198 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468203 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468210 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468214 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468218 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468223 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468227 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468232 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468236 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468240 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468243 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.468250 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468427 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468440 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468445 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468450 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468455 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468460 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468464 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468468 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468473 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468477 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468481 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468485 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468490 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468494 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468499 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468522 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468530 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468536 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468541 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468546 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468551 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468555 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468561 4869 feature_gate.go:330] unrecognized feature gate: Example
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468566 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468571 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468576 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468581 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468585 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468590 4869 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468595 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468599 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468607 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468613 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468619 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468625 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468631 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468637 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468642 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468647 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468652 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468658 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468662 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468666 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468669 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468673 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468676 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468680 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468683 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468687 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468690 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468694 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468698 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468701 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468705 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468708 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468713 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468717 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468722 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468727 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468731 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468736 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468741 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468745 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468750 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468755 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468760 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468765 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468770 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468775 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468780 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.468784 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.468792 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.469592 4869 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.472952 4869 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.477376 4869 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.477537 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.479389 4869 server.go:997] "Starting client certificate rotation"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.479429 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.479665 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.512948 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.515597 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.515984 4869 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.537147 4869 log.go:25] "Validated CRI v1 runtime API"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.581035 4869 log.go:25] "Validated CRI v1 image API"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.583956 4869 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.589337 4869 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-14-08-51-51-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.589385 4869 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.616850 4869 manager.go:217] Machine: {Timestamp:2026-03-14 08:57:27.614085803 +0000 UTC m=+0.586367936 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b9f13929-24fa-42f7-b237-4766a535e935 BootID:e8736076-5c62-4abb-8b49-b2af716eaec4 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:aa:ed:1c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:aa:ed:1c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0b:2c:ec Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f1:76:c3 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ad:49:77 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:90:eb:a6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:35:27:68:c9:f1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:aa:da:9a:7a:08:94 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.617246 4869 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.617503 4869 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.619368 4869 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.619730 4869 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.619783 4869 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.620112 4869 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.620137 4869 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.620747 4869 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.620820 4869 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.621228 4869 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.621788 4869 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.625122 4869 kubelet.go:418] "Attempting to sync node with API server"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.625143 4869 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.625157 4869 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.625169 4869 kubelet.go:324] "Adding apiserver pod source"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.625180 4869 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.629876 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused
Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.630092 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError"
Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.631171 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused
Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.631272 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.632661 4869 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.633789 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.636398 4869 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638060 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638118 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638141 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638165 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638198 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638233 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638254 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638281 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638298 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638313 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638333 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638348 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.638394 4869 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.639120 4869 server.go:1280] "Started kubelet" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.639522 4869 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.639470 4869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.640608 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:27 crc systemd[1]: Started Kubernetes Kubelet. Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.641327 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.641353 4869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.641547 4869 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.641562 4869 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.641624 4869 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.641677 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.641863 4869 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.642138 4869 factory.go:55] Registering systemd factory Mar 14 08:57:27 crc 
kubenswrapper[4869]: I0314 08:57:27.642153 4869 factory.go:221] Registration of the systemd container factory successfully Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.642325 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.642420 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.642797 4869 factory.go:153] Registering CRI-O factory Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.642822 4869 factory.go:221] Registration of the crio container factory successfully Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.642883 4869 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.642912 4869 factory.go:103] Registering Raw factory Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.642931 4869 manager.go:1196] Started watching for new ooms in manager Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.643040 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="200ms" Mar 14 08:57:27 crc 
kubenswrapper[4869]: I0314 08:57:27.643912 4869 manager.go:319] Starting recovery of all containers Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.646358 4869 server.go:460] "Adding debug handlers to kubelet server" Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.645686 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.148:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189ca96f094471f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,LastTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.660389 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.661958 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662000 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662014 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662028 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662064 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662077 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662089 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662103 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" 
seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662114 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662126 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662136 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662147 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662162 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662173 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 
08:57:27.662184 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662194 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662204 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662216 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662226 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662239 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662249 4869 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662260 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662271 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662283 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662292 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662304 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662315 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662329 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662339 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662349 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662360 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662370 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662380 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662412 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662421 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662432 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662442 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662452 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662462 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662472 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662482 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662493 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662517 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662528 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662540 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" 
seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662552 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662562 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662573 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662583 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662592 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662603 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 
08:57:27.662618 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662629 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662641 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662651 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662662 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662672 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662705 4869 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662716 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662726 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662738 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662749 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662764 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662773 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662783 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662793 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662803 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662813 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662823 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662832 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662842 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662852 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662861 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662870 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662881 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662890 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662901 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662912 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662923 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662932 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662942 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662952 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662964 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.662974 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663005 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663016 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663026 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663036 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663046 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663057 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663068 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663078 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663089 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663102 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663112 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663122 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663134 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663145 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663154 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663165 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663175 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663184 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663196 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.663212 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664769 4869 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664797 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664810 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664821 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664835 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664844 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664854 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664864 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664874 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664885 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664895 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664905 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664916 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664926 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" 
seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664936 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664948 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664964 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664974 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664986 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.664996 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665006 4869 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665016 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665026 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665036 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665046 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665058 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665067 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665077 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665087 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665097 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665107 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665116 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665126 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665135 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665145 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665155 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665165 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665174 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665184 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665192 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665202 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665212 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665220 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665229 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665239 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665248 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665257 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665266 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665276 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665286 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665295 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665315 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665326 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665337 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665350 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665359 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665370 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665379 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665388 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665397 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665407 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665415 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665426 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665437 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665448 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665458 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665468 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665477 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665489 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" 
seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665499 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665527 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665541 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665555 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665567 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665579 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665591 4869 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665602 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665613 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665623 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665633 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665643 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665653 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665663 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665672 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665683 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665693 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665702 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665712 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665723 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665732 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665741 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665750 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665759 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665769 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665778 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665787 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665796 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665806 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665815 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665824 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665834 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665844 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665854 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665864 4869 reconstruct.go:97] "Volume reconstruction finished" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.665872 4869 reconciler.go:26] "Reconciler: start to sync state" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.680909 4869 manager.go:324] Recovery completed Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.690198 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.691449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.691523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.691534 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.692413 4869 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.692428 4869 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.692445 4869 state_mem.go:36] "Initialized new in-memory state store" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.700613 4869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.702450 4869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.702482 4869 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.702534 4869 kubelet.go:2335] "Starting kubelet main sync loop" Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.702570 4869 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 08:57:27 crc kubenswrapper[4869]: W0314 08:57:27.703466 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.703554 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:57:27 crc 
kubenswrapper[4869]: I0314 08:57:27.717468 4869 policy_none.go:49] "None policy: Start" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.718368 4869 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.718389 4869 state_mem.go:35] "Initializing new in-memory state store" Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.742186 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.770046 4869 manager.go:334] "Starting Device Plugin manager" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.770104 4869 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.770119 4869 server.go:79] "Starting device plugin registration server" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.770602 4869 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.770623 4869 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.770796 4869 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.770885 4869 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.770899 4869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.777741 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.803654 4869 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.803810 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.804957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.805021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.805035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.805233 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.805355 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.805385 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.806301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.806328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.806341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.806366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.806397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.806411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.806644 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.806733 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.806757 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.808134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.808161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.808171 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.808211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.808237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.808245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.808390 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.808536 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.808580 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.809231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.809259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.809274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.809416 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.809687 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.809735 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811621 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.811656 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.813143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.813179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.813192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.844280 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="400ms" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868054 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868119 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868156 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868174 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868191 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868209 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868261 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868307 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868358 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868386 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868408 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.868436 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.870872 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.871928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.871962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.871970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.871996 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:27 crc kubenswrapper[4869]: E0314 08:57:27.872240 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.148:6443: connect: connection 
refused" node="crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970057 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970166 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970205 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970246 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970273 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970296 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970249 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970247 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970332 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 
08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970368 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970405 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970408 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970420 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970446 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970455 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970480 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970417 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970564 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970619 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970637 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970646 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970685 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970684 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970716 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:27 crc kubenswrapper[4869]: I0314 08:57:27.970654 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.072337 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.073366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.073403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.073416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.073441 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:28 crc kubenswrapper[4869]: E0314 08:57:28.073709 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.148:6443: connect: 
connection refused" node="crc" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.141138 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.176249 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:28 crc kubenswrapper[4869]: W0314 08:57:28.182329 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-502ed30e0bb7f7438c63c8438e6dbfe7d46304c0ef5def53edd3976a4b290e5a WatchSource:0}: Error finding container 502ed30e0bb7f7438c63c8438e6dbfe7d46304c0ef5def53edd3976a4b290e5a: Status 404 returned error can't find the container with id 502ed30e0bb7f7438c63c8438e6dbfe7d46304c0ef5def53edd3976a4b290e5a Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.187786 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 14 08:57:28 crc kubenswrapper[4869]: W0314 08:57:28.208366 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2fda43c15896428611a420582ac65df19df4aa102ec99fdfb6a88d096b596af6 WatchSource:0}: Error finding container 2fda43c15896428611a420582ac65df19df4aa102ec99fdfb6a88d096b596af6: Status 404 returned error can't find the container with id 2fda43c15896428611a420582ac65df19df4aa102ec99fdfb6a88d096b596af6 Mar 14 08:57:28 crc kubenswrapper[4869]: W0314 08:57:28.208953 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a202cca6ce878771140f1bd130b3320bb34aff2f8864f649e2b4cb2eef5e7bf1 WatchSource:0}: Error finding container a202cca6ce878771140f1bd130b3320bb34aff2f8864f649e2b4cb2eef5e7bf1: Status 404 returned error can't find the container with id a202cca6ce878771140f1bd130b3320bb34aff2f8864f649e2b4cb2eef5e7bf1 Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.210090 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.213855 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:28 crc kubenswrapper[4869]: W0314 08:57:28.224142 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1beca04f35e384f5a28b91daba896ee404843c66fb64dac373e27f49af21457b WatchSource:0}: Error finding container 1beca04f35e384f5a28b91daba896ee404843c66fb64dac373e27f49af21457b: Status 404 returned error can't find the container with id 1beca04f35e384f5a28b91daba896ee404843c66fb64dac373e27f49af21457b Mar 14 08:57:28 crc kubenswrapper[4869]: W0314 08:57:28.231585 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-602b0d49806fa1d43b305eec507ed195f65b9f7fa1ec10a83c072252594f87f6 WatchSource:0}: Error finding container 602b0d49806fa1d43b305eec507ed195f65b9f7fa1ec10a83c072252594f87f6: Status 404 returned error can't find the container with id 602b0d49806fa1d43b305eec507ed195f65b9f7fa1ec10a83c072252594f87f6 Mar 14 08:57:28 crc kubenswrapper[4869]: E0314 08:57:28.245076 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="800ms" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.474806 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.476280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.476313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.476322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.476344 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:28 crc kubenswrapper[4869]: E0314 08:57:28.476748 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.148:6443: connect: connection refused" node="crc" Mar 14 08:57:28 crc kubenswrapper[4869]: W0314 08:57:28.578414 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:28 crc kubenswrapper[4869]: E0314 08:57:28.578524 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:57:28 crc kubenswrapper[4869]: W0314 08:57:28.607284 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:28 crc kubenswrapper[4869]: E0314 08:57:28.607354 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.641494 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.709322 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"602b0d49806fa1d43b305eec507ed195f65b9f7fa1ec10a83c072252594f87f6"} Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.713489 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1beca04f35e384f5a28b91daba896ee404843c66fb64dac373e27f49af21457b"} Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.714493 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a202cca6ce878771140f1bd130b3320bb34aff2f8864f649e2b4cb2eef5e7bf1"} Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.715396 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2fda43c15896428611a420582ac65df19df4aa102ec99fdfb6a88d096b596af6"} Mar 14 08:57:28 crc kubenswrapper[4869]: I0314 08:57:28.716421 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"502ed30e0bb7f7438c63c8438e6dbfe7d46304c0ef5def53edd3976a4b290e5a"} Mar 14 08:57:28 
crc kubenswrapper[4869]: W0314 08:57:28.795449 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:28 crc kubenswrapper[4869]: E0314 08:57:28.795559 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:57:29 crc kubenswrapper[4869]: E0314 08:57:29.046105 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="1.6s" Mar 14 08:57:29 crc kubenswrapper[4869]: W0314 08:57:29.212290 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:29 crc kubenswrapper[4869]: E0314 08:57:29.212428 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.277040 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.278670 
4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.278753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.278780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.278831 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:29 crc kubenswrapper[4869]: E0314 08:57:29.279607 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.148:6443: connect: connection refused" node="crc" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.641746 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.654848 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 08:57:29 crc kubenswrapper[4869]: E0314 08:57:29.656535 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.719745 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61" exitCode=0 Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.719801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61"} Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.719890 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.721064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.721091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.721102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.725221 4869 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e" exitCode=0 Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.725303 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.725305 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e"} Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.726167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:29 crc 
kubenswrapper[4869]: I0314 08:57:29.726216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.726227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.726885 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c" exitCode=0 Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.726951 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c"} Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.726980 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.727685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.727715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.727725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.730109 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d4ee4f1b46d2eab88614dc02cbc4239ff5317feb3132a80acd0c4c8132388d14"} Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.730152 4869 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.730151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b917cd2840a112af49cdeaf81d121bb8e5e6835cb19d3598b412c890a15c1d49"} Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.730209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"64c501a0f06b3c2c0485bf8bdb975073ae34bdb4cb26e9ed81a78ac5ff5a4f21"} Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.730255 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f8dfdf89c5eabafa0eb7699f688cd19b618d9dec88a291eaa255669ef5cb5e69"} Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.730804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.730830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.730842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.731937 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45" exitCode=0 Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.731966 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45"} Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.732055 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.732585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.732601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.732609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.742130 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.743319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.743356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:29 crc kubenswrapper[4869]: I0314 08:57:29.743368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:30 crc kubenswrapper[4869]: W0314 08:57:30.444998 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:30 crc kubenswrapper[4869]: E0314 08:57:30.446310 4869 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.148:6443: connect: connection refused" logger="UnhandledError" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.641726 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.148:6443: connect: connection refused Mar 14 08:57:30 crc kubenswrapper[4869]: E0314 08:57:30.647355 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="3.2s" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.737918 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a"} Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.737958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069"} Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.737970 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37"} Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.737980 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc"} Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.739227 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c" exitCode=0 Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.739292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c"} Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.739424 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.740223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.740256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.740267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.742626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3"} Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.742669 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.742685 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8"} Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.742696 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480"} Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.744545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.744574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.744585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.750161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"53c20cb39397f3ad7196a978a8f681448e78f7215adc7a92c001c502491b3df3"} Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.750192 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.750219 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.751830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.751855 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.751863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.751883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.751907 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.751917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.880151 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.881182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.881217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.881227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:30 crc kubenswrapper[4869]: I0314 08:57:30.881250 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:30 crc kubenswrapper[4869]: E0314 08:57:30.881694 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.148:6443: connect: connection refused" node="crc" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.755734 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"61a8731c6245e6affcc3e52c52d36bba152d01c7d1f9801f8a1dd7f622aa209a"} Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.755848 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.756810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.756844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.756860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.759590 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026" exitCode=0 Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.759670 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026"} Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.759701 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.759770 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.759853 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.759922 4869 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.761951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.762015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.762042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.762427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.762553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.762580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.762767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.762817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:31 crc kubenswrapper[4869]: I0314 08:57:31.762841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.380192 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.431484 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:57:32 
crc kubenswrapper[4869]: I0314 08:57:32.769447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386"} Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.769563 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae"} Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.769590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8"} Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.769613 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf"} Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.769634 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b"} Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.769483 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.769665 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.769737 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.769594 
4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.771209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.771254 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.771278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.771364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.771403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.771415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.771679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.771704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.771716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.885258 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.885578 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.887136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.887219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.887239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:32 crc kubenswrapper[4869]: I0314 08:57:32.893198 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.749661 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.772731 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.772843 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.772900 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.772920 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.772846 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.774674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:33 crc kubenswrapper[4869]: 
I0314 08:57:33.774728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.774749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.774881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.774936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.774954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.774975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.775021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:33 crc kubenswrapper[4869]: I0314 08:57:33.775039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.016261 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.081787 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.083189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.083229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.083240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.083268 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.211993 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.774901 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.774980 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.774943 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.776063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.776105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.776117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.776901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.776949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.776962 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.925764 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.925954 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.927038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.927076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:34 crc kubenswrapper[4869]: I0314 08:57:34.927085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:36 crc kubenswrapper[4869]: I0314 08:57:36.407752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:36 crc kubenswrapper[4869]: I0314 08:57:36.407949 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:36 crc kubenswrapper[4869]: I0314 08:57:36.409216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:36 crc kubenswrapper[4869]: I0314 08:57:36.409258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:36 crc kubenswrapper[4869]: I0314 08:57:36.409267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:37 crc kubenswrapper[4869]: E0314 08:57:37.778624 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:57:37 crc 
kubenswrapper[4869]: I0314 08:57:37.827343 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:37 crc kubenswrapper[4869]: I0314 08:57:37.827601 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:37 crc kubenswrapper[4869]: I0314 08:57:37.828666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:37 crc kubenswrapper[4869]: I0314 08:57:37.828705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:37 crc kubenswrapper[4869]: I0314 08:57:37.828716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:39 crc kubenswrapper[4869]: I0314 08:57:39.408388 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 08:57:39 crc kubenswrapper[4869]: I0314 08:57:39.408462 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 08:57:39 crc kubenswrapper[4869]: I0314 08:57:39.713187 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:57:39 crc kubenswrapper[4869]: I0314 08:57:39.713365 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Mar 14 08:57:39 crc kubenswrapper[4869]: I0314 08:57:39.719946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:39 crc kubenswrapper[4869]: I0314 08:57:39.719992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:39 crc kubenswrapper[4869]: I0314 08:57:39.720010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:41 crc kubenswrapper[4869]: W0314 08:57:41.394974 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.395119 4869 trace.go:236] Trace[1449478801]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Mar-2026 08:57:31.393) (total time: 10001ms): Mar 14 08:57:41 crc kubenswrapper[4869]: Trace[1449478801]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:57:41.394) Mar 14 08:57:41 crc kubenswrapper[4869]: Trace[1449478801]: [10.001551907s] [10.001551907s] END Mar 14 08:57:41 crc kubenswrapper[4869]: E0314 08:57:41.395147 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 14 08:57:41 crc kubenswrapper[4869]: W0314 08:57:41.396340 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z Mar 14 08:57:41 crc kubenswrapper[4869]: E0314 08:57:41.396406 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.396895 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z Mar 14 08:57:41 crc kubenswrapper[4869]: E0314 08:57:41.398123 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 14 08:57:41 crc kubenswrapper[4869]: E0314 08:57:41.401333 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:41 crc kubenswrapper[4869]: E0314 08:57:41.403208 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 08:57:41 crc kubenswrapper[4869]: W0314 08:57:41.404463 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z Mar 14 08:57:41 crc kubenswrapper[4869]: E0314 08:57:41.404557 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:41 crc kubenswrapper[4869]: E0314 08:57:41.408223 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca96f094471f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,LastTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.408875 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.409018 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.413376 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.413447 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 403" Mar 14 08:57:41 crc kubenswrapper[4869]: W0314 08:57:41.417561 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z Mar 14 08:57:41 crc kubenswrapper[4869]: E0314 08:57:41.417654 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.605186 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.605426 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.606624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.606668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.606678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:41 crc kubenswrapper[4869]: I0314 08:57:41.643423 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:41Z is after 2026-02-23T05:33:13Z Mar 14 08:57:42 crc kubenswrapper[4869]: I0314 08:57:42.644113 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:42Z is after 2026-02-23T05:33:13Z Mar 14 08:57:42 crc kubenswrapper[4869]: I0314 08:57:42.795903 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 14 08:57:42 crc kubenswrapper[4869]: I0314 08:57:42.797547 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="61a8731c6245e6affcc3e52c52d36bba152d01c7d1f9801f8a1dd7f622aa209a" exitCode=255 Mar 14 08:57:42 crc kubenswrapper[4869]: I0314 08:57:42.797627 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"61a8731c6245e6affcc3e52c52d36bba152d01c7d1f9801f8a1dd7f622aa209a"} Mar 14 08:57:42 crc kubenswrapper[4869]: I0314 08:57:42.798303 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:42 crc kubenswrapper[4869]: I0314 08:57:42.799058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:42 crc kubenswrapper[4869]: I0314 08:57:42.799085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:42 crc kubenswrapper[4869]: 
I0314 08:57:42.799096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:42 crc kubenswrapper[4869]: I0314 08:57:42.799671 4869 scope.go:117] "RemoveContainer" containerID="61a8731c6245e6affcc3e52c52d36bba152d01c7d1f9801f8a1dd7f622aa209a" Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.645131 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:43Z is after 2026-02-23T05:33:13Z Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.802252 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.803106 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.805087 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9d0f8890a34c89602dae53ac32a86f6dd32aba3b212061fefaded28740a9c97e" exitCode=255 Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.805209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9d0f8890a34c89602dae53ac32a86f6dd32aba3b212061fefaded28740a9c97e"} Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.805529 4869 scope.go:117] "RemoveContainer" containerID="61a8731c6245e6affcc3e52c52d36bba152d01c7d1f9801f8a1dd7f622aa209a" Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.805656 
4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.806656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.807032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.807068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.808099 4869 scope.go:117] "RemoveContainer" containerID="9d0f8890a34c89602dae53ac32a86f6dd32aba3b212061fefaded28740a9c97e" Mar 14 08:57:43 crc kubenswrapper[4869]: E0314 08:57:43.808427 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:43 crc kubenswrapper[4869]: I0314 08:57:43.847743 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:44 crc kubenswrapper[4869]: I0314 08:57:44.219962 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:44 crc kubenswrapper[4869]: I0314 08:57:44.643761 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-14T08:57:44Z is after 2026-02-23T05:33:13Z Mar 14 08:57:44 crc kubenswrapper[4869]: I0314 08:57:44.809558 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 14 08:57:44 crc kubenswrapper[4869]: I0314 08:57:44.812060 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:44 crc kubenswrapper[4869]: I0314 08:57:44.812812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:44 crc kubenswrapper[4869]: I0314 08:57:44.812853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:44 crc kubenswrapper[4869]: I0314 08:57:44.812864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:44 crc kubenswrapper[4869]: I0314 08:57:44.813399 4869 scope.go:117] "RemoveContainer" containerID="9d0f8890a34c89602dae53ac32a86f6dd32aba3b212061fefaded28740a9c97e" Mar 14 08:57:44 crc kubenswrapper[4869]: E0314 08:57:44.813593 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:44 crc kubenswrapper[4869]: I0314 08:57:44.817183 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:45 crc kubenswrapper[4869]: W0314 08:57:45.593034 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:45Z is after 2026-02-23T05:33:13Z Mar 14 08:57:45 crc kubenswrapper[4869]: E0314 08:57:45.593115 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:45Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:45 crc kubenswrapper[4869]: I0314 08:57:45.645932 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:45Z is after 2026-02-23T05:33:13Z Mar 14 08:57:45 crc kubenswrapper[4869]: I0314 08:57:45.814595 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:45 crc kubenswrapper[4869]: I0314 08:57:45.815545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:45 crc kubenswrapper[4869]: I0314 08:57:45.815596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:45 crc kubenswrapper[4869]: I0314 08:57:45.815607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:45 crc kubenswrapper[4869]: I0314 08:57:45.816308 4869 scope.go:117] "RemoveContainer" 
containerID="9d0f8890a34c89602dae53ac32a86f6dd32aba3b212061fefaded28740a9c97e" Mar 14 08:57:45 crc kubenswrapper[4869]: E0314 08:57:45.816541 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:46 crc kubenswrapper[4869]: W0314 08:57:46.013039 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:46Z is after 2026-02-23T05:33:13Z Mar 14 08:57:46 crc kubenswrapper[4869]: E0314 08:57:46.013162 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:46 crc kubenswrapper[4869]: I0314 08:57:46.644403 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:46Z is after 2026-02-23T05:33:13Z Mar 14 08:57:46 crc kubenswrapper[4869]: I0314 08:57:46.816905 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 14 08:57:46 crc kubenswrapper[4869]: I0314 08:57:46.817951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:46 crc kubenswrapper[4869]: I0314 08:57:46.818002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:46 crc kubenswrapper[4869]: I0314 08:57:46.818016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:46 crc kubenswrapper[4869]: I0314 08:57:46.818721 4869 scope.go:117] "RemoveContainer" containerID="9d0f8890a34c89602dae53ac32a86f6dd32aba3b212061fefaded28740a9c97e" Mar 14 08:57:46 crc kubenswrapper[4869]: E0314 08:57:46.818937 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:46 crc kubenswrapper[4869]: W0314 08:57:46.845150 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:46Z is after 2026-02-23T05:33:13Z Mar 14 08:57:46 crc kubenswrapper[4869]: E0314 08:57:46.845241 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:57:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.645845 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:47Z is after 2026-02-23T05:33:13Z Mar 14 08:57:47 crc kubenswrapper[4869]: E0314 08:57:47.778957 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:57:47 crc kubenswrapper[4869]: E0314 08:57:47.802385 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:47Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.803385 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.805087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.805152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.805173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.805224 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:47 crc kubenswrapper[4869]: E0314 
08:57:47.808475 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:47Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.827436 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.827782 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.829499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.829592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.829605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:47 crc kubenswrapper[4869]: I0314 08:57:47.830273 4869 scope.go:117] "RemoveContainer" containerID="9d0f8890a34c89602dae53ac32a86f6dd32aba3b212061fefaded28740a9c97e" Mar 14 08:57:47 crc kubenswrapper[4869]: E0314 08:57:47.830472 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:57:48 crc kubenswrapper[4869]: I0314 08:57:48.647078 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:48Z is after 2026-02-23T05:33:13Z Mar 14 08:57:49 crc kubenswrapper[4869]: I0314 08:57:49.408027 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 08:57:49 crc kubenswrapper[4869]: I0314 08:57:49.408111 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 08:57:49 crc kubenswrapper[4869]: I0314 08:57:49.644215 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:49Z is after 2026-02-23T05:33:13Z Mar 14 08:57:49 crc kubenswrapper[4869]: I0314 08:57:49.876627 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 08:57:49 crc kubenswrapper[4869]: E0314 08:57:49.880904 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:50 crc kubenswrapper[4869]: I0314 08:57:50.644544 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:50Z is after 2026-02-23T05:33:13Z Mar 14 08:57:51 crc kubenswrapper[4869]: E0314 08:57:51.411823 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:51Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca96f094471f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,LastTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:57:51 crc kubenswrapper[4869]: W0314 08:57:51.555311 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-14T08:57:51Z is after 2026-02-23T05:33:13Z Mar 14 08:57:51 crc kubenswrapper[4869]: E0314 08:57:51.555390 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.629751 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.630099 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.631908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.631973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.631997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.643177 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.644272 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:51Z is after 2026-02-23T05:33:13Z Mar 14 08:57:51 crc kubenswrapper[4869]: 
I0314 08:57:51.830606 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.832010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.832065 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:51 crc kubenswrapper[4869]: I0314 08:57:51.832076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:52 crc kubenswrapper[4869]: I0314 08:57:52.644979 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:52Z is after 2026-02-23T05:33:13Z Mar 14 08:57:53 crc kubenswrapper[4869]: I0314 08:57:53.644548 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:53Z is after 2026-02-23T05:33:13Z Mar 14 08:57:54 crc kubenswrapper[4869]: I0314 08:57:54.644382 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:54Z is after 2026-02-23T05:33:13Z Mar 14 08:57:54 crc kubenswrapper[4869]: E0314 08:57:54.806142 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:54Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 14 08:57:54 crc kubenswrapper[4869]: I0314 08:57:54.809452 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:57:54 crc kubenswrapper[4869]: I0314 08:57:54.811045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:57:54 crc kubenswrapper[4869]: I0314 08:57:54.811153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:57:54 crc kubenswrapper[4869]: I0314 08:57:54.811173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:57:54 crc kubenswrapper[4869]: I0314 08:57:54.811212 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:57:54 crc kubenswrapper[4869]: E0314 08:57:54.814375 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:54Z is after 2026-02-23T05:33:13Z" node="crc" Mar 14 08:57:55 crc kubenswrapper[4869]: I0314 08:57:55.645782 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:55Z is after 2026-02-23T05:33:13Z Mar 14 08:57:56 crc kubenswrapper[4869]: W0314 08:57:56.150038 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:56Z is after 2026-02-23T05:33:13Z Mar 14 08:57:56 crc kubenswrapper[4869]: E0314 08:57:56.150120 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:56 crc kubenswrapper[4869]: I0314 08:57:56.644098 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:56Z is after 2026-02-23T05:33:13Z Mar 14 08:57:57 crc kubenswrapper[4869]: W0314 08:57:57.130453 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:57Z is after 2026-02-23T05:33:13Z Mar 14 08:57:57 crc kubenswrapper[4869]: E0314 08:57:57.130577 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-03-14T08:57:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:57 crc kubenswrapper[4869]: W0314 08:57:57.437433 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:57Z is after 2026-02-23T05:33:13Z Mar 14 08:57:57 crc kubenswrapper[4869]: E0314 08:57:57.437531 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 14 08:57:57 crc kubenswrapper[4869]: I0314 08:57:57.644395 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:57Z is after 2026-02-23T05:33:13Z Mar 14 08:57:57 crc kubenswrapper[4869]: E0314 08:57:57.779361 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:57:58 crc kubenswrapper[4869]: I0314 08:57:58.643752 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:58Z is after 2026-02-23T05:33:13Z 
Mar 14 08:57:58 crc kubenswrapper[4869]: I0314 08:57:58.703419 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:58 crc kubenswrapper[4869]: I0314 08:57:58.704597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:58 crc kubenswrapper[4869]: I0314 08:57:58.704642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:58 crc kubenswrapper[4869]: I0314 08:57:58.704652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:58 crc kubenswrapper[4869]: I0314 08:57:58.705179 4869 scope.go:117] "RemoveContainer" containerID="9d0f8890a34c89602dae53ac32a86f6dd32aba3b212061fefaded28740a9c97e"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.409761 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.410101 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.410223 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.410431 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.411651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.411682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.411695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.412288 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"64c501a0f06b3c2c0485bf8bdb975073ae34bdb4cb26e9ed81a78ac5ff5a4f21"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.412468 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://64c501a0f06b3c2c0485bf8bdb975073ae34bdb4cb26e9ed81a78ac5ff5a4f21" gracePeriod=30
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.644187 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:57:59Z is after 2026-02-23T05:33:13Z
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.856111 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.857631 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.861842 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7ac66a352ad087e4758bb492953da7169f05bfed409bcf30f8f7dff0ff8ab5e4" exitCode=255
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.861899 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7ac66a352ad087e4758bb492953da7169f05bfed409bcf30f8f7dff0ff8ab5e4"}
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.861994 4869 scope.go:117] "RemoveContainer" containerID="9d0f8890a34c89602dae53ac32a86f6dd32aba3b212061fefaded28740a9c97e"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.862206 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.863137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.863241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.863319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.867296 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.867969 4869 scope.go:117] "RemoveContainer" containerID="7ac66a352ad087e4758bb492953da7169f05bfed409bcf30f8f7dff0ff8ab5e4"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.868061 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="64c501a0f06b3c2c0485bf8bdb975073ae34bdb4cb26e9ed81a78ac5ff5a4f21" exitCode=255
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.868138 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"64c501a0f06b3c2c0485bf8bdb975073ae34bdb4cb26e9ed81a78ac5ff5a4f21"}
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.868180 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"54305702853ee9b84f5c9e0873af79ee05cd4393d41fc890f0fb67d393fa048c"}
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.868308 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:57:59 crc kubenswrapper[4869]: E0314 08:57:59.868579 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.869469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.869502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:57:59 crc kubenswrapper[4869]: I0314 08:57:59.869582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:00 crc kubenswrapper[4869]: I0314 08:58:00.645405 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:00Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:00 crc kubenswrapper[4869]: I0314 08:58:00.874392 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 14 08:58:01 crc kubenswrapper[4869]: E0314 08:58:01.415758 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:01Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca96f094471f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,LastTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:58:01 crc kubenswrapper[4869]: I0314 08:58:01.644703 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:01Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:01 crc kubenswrapper[4869]: E0314 08:58:01.812050 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:01Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 14 08:58:01 crc kubenswrapper[4869]: I0314 08:58:01.815354 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:01 crc kubenswrapper[4869]: I0314 08:58:01.816988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:01 crc kubenswrapper[4869]: I0314 08:58:01.817030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:01 crc kubenswrapper[4869]: I0314 08:58:01.817042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:01 crc kubenswrapper[4869]: I0314 08:58:01.817068 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 08:58:01 crc kubenswrapper[4869]: E0314 08:58:01.822475 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:01Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 14 08:58:02 crc kubenswrapper[4869]: I0314 08:58:02.644407 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:02Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.644423 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:03Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.750098 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.750300 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.751677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.751717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.751729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.847725 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.847915 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.849306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.849351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.849361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:03 crc kubenswrapper[4869]: I0314 08:58:03.849903 4869 scope.go:117] "RemoveContainer" containerID="7ac66a352ad087e4758bb492953da7169f05bfed409bcf30f8f7dff0ff8ab5e4"
Mar 14 08:58:03 crc kubenswrapper[4869]: E0314 08:58:03.850085 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 14 08:58:04 crc kubenswrapper[4869]: I0314 08:58:04.644853 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:04Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:05 crc kubenswrapper[4869]: I0314 08:58:05.644308 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:05Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:06 crc kubenswrapper[4869]: I0314 08:58:06.408448 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 14 08:58:06 crc kubenswrapper[4869]: I0314 08:58:06.408716 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:06 crc kubenswrapper[4869]: I0314 08:58:06.409975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:06 crc kubenswrapper[4869]: I0314 08:58:06.410014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:06 crc kubenswrapper[4869]: I0314 08:58:06.410027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:06 crc kubenswrapper[4869]: I0314 08:58:06.645822 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:06Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:07 crc kubenswrapper[4869]: I0314 08:58:07.362152 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 14 08:58:07 crc kubenswrapper[4869]: E0314 08:58:07.365883 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:07Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 08:58:07 crc kubenswrapper[4869]: E0314 08:58:07.367247 4869 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError"
Mar 14 08:58:07 crc kubenswrapper[4869]: I0314 08:58:07.646355 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:07Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:07 crc kubenswrapper[4869]: E0314 08:58:07.779493 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 14 08:58:07 crc kubenswrapper[4869]: I0314 08:58:07.827752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 14 08:58:07 crc kubenswrapper[4869]: I0314 08:58:07.828020 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:07 crc kubenswrapper[4869]: I0314 08:58:07.829578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:07 crc kubenswrapper[4869]: I0314 08:58:07.829638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:07 crc kubenswrapper[4869]: I0314 08:58:07.829653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:07 crc kubenswrapper[4869]: I0314 08:58:07.830371 4869 scope.go:117] "RemoveContainer" containerID="7ac66a352ad087e4758bb492953da7169f05bfed409bcf30f8f7dff0ff8ab5e4"
Mar 14 08:58:07 crc kubenswrapper[4869]: E0314 08:58:07.830580 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 14 08:58:08 crc kubenswrapper[4869]: I0314 08:58:08.644898 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:08Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:08 crc kubenswrapper[4869]: E0314 08:58:08.817456 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:08Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 14 08:58:08 crc kubenswrapper[4869]: I0314 08:58:08.823498 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:08 crc kubenswrapper[4869]: I0314 08:58:08.824561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:08 crc kubenswrapper[4869]: I0314 08:58:08.824610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:08 crc kubenswrapper[4869]: I0314 08:58:08.824620 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:08 crc kubenswrapper[4869]: I0314 08:58:08.824644 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 08:58:08 crc kubenswrapper[4869]: E0314 08:58:08.827256 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:08Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 14 08:58:09 crc kubenswrapper[4869]: I0314 08:58:09.409147 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 14 08:58:09 crc kubenswrapper[4869]: I0314 08:58:09.409303 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 14 08:58:09 crc kubenswrapper[4869]: I0314 08:58:09.644932 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:09Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:10 crc kubenswrapper[4869]: I0314 08:58:10.647965 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:10Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:11 crc kubenswrapper[4869]: E0314 08:58:11.418888 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:11Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ca96f094471f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,LastTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 14 08:58:11 crc kubenswrapper[4869]: I0314 08:58:11.645353 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:11Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:12 crc kubenswrapper[4869]: I0314 08:58:12.644711 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:12Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:12 crc kubenswrapper[4869]: W0314 08:58:12.758670 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:12Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:12 crc kubenswrapper[4869]: E0314 08:58:12.759004 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 08:58:13 crc kubenswrapper[4869]: W0314 08:58:13.087064 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:13Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:13 crc kubenswrapper[4869]: E0314 08:58:13.087175 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 08:58:13 crc kubenswrapper[4869]: I0314 08:58:13.644569 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:13Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:14 crc kubenswrapper[4869]: I0314 08:58:14.644892 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:14Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:15 crc kubenswrapper[4869]: W0314 08:58:15.566204 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:15Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:15 crc kubenswrapper[4869]: E0314 08:58:15.566317 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:15Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 08:58:15 crc kubenswrapper[4869]: I0314 08:58:15.644094 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:15Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:15 crc kubenswrapper[4869]: E0314 08:58:15.820618 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:15Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 14 08:58:15 crc kubenswrapper[4869]: I0314 08:58:15.827847 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:15 crc kubenswrapper[4869]: I0314 08:58:15.829493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:15 crc kubenswrapper[4869]: I0314 08:58:15.829538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:15 crc kubenswrapper[4869]: I0314 08:58:15.829549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:15 crc kubenswrapper[4869]: I0314 08:58:15.829575 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 14 08:58:15 crc kubenswrapper[4869]: E0314 08:58:15.831974 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:15Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 14 08:58:16 crc kubenswrapper[4869]: W0314 08:58:16.539951 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:16Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:16 crc kubenswrapper[4869]: E0314 08:58:16.540028 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 14 08:58:16 crc kubenswrapper[4869]: I0314 08:58:16.645187 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:16Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:17 crc kubenswrapper[4869]: I0314 08:58:17.644545 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:17Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:17 crc kubenswrapper[4869]: E0314 08:58:17.780699 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 14 08:58:18 crc kubenswrapper[4869]: I0314 08:58:18.644207 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:58:18Z is after 2026-02-23T05:33:13Z
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.408314 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.408435 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.645559 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.702917 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.704336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.704987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.705223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.706479 4869 scope.go:117] "RemoveContainer" containerID="7ac66a352ad087e4758bb492953da7169f05bfed409bcf30f8f7dff0ff8ab5e4"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.933915 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.935931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc"}
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.936112 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.937383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.937412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:19 crc kubenswrapper[4869]: I0314 08:58:19.937435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.648057 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.940252 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.940922 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.942720 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc" exitCode=255
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.942760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc"}
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.942808 4869 scope.go:117] "RemoveContainer" containerID="7ac66a352ad087e4758bb492953da7169f05bfed409bcf30f8f7dff0ff8ab5e4"
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.943026 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.943973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.944003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.944011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 14 08:58:20 crc kubenswrapper[4869]: I0314 08:58:20.944560 4869 scope.go:117] "RemoveContainer" containerID="17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc"
Mar 14 08:58:20 crc kubenswrapper[4869]: E0314 08:58:20.944728 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc"
podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.425965 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f094471f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,LastTimestamp:2026-03-14 08:57:27.639077364 +0000 UTC m=+0.611359457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.431276 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c642ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC m=+0.663770014,LastTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC m=+0.663770014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.437336 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64d034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691530292 +0000 UTC m=+0.663812345,LastTimestamp:2026-03-14 08:57:27.691530292 +0000 UTC m=+0.663812345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.447249 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64f04e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.69153851 +0000 UTC m=+0.663820563,LastTimestamp:2026-03-14 08:57:27.69153851 +0000 UTC m=+0.663820563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.451839 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f115a5874 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.774730356 +0000 UTC m=+0.747012409,LastTimestamp:2026-03-14 08:57:27.774730356 +0000 UTC m=+0.747012409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.457942 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c642ad9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c642ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC m=+0.663770014,LastTimestamp:2026-03-14 08:57:27.805000774 +0000 UTC m=+0.777282867,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.462957 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64d034\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64d034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691530292 +0000 UTC m=+0.663812345,LastTimestamp:2026-03-14 08:57:27.805030937 +0000 UTC m=+0.777313000,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.472717 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64f04e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64f04e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.69153851 +0000 UTC m=+0.663820563,LastTimestamp:2026-03-14 08:57:27.805042605 +0000 UTC m=+0.777324668,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.474077 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c642ad9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c642ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC 
m=+0.663770014,LastTimestamp:2026-03-14 08:57:27.806322778 +0000 UTC m=+0.778604841,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.479232 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64d034\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64d034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691530292 +0000 UTC m=+0.663812345,LastTimestamp:2026-03-14 08:57:27.806335455 +0000 UTC m=+0.778617518,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.485267 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64f04e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64f04e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.69153851 +0000 UTC m=+0.663820563,LastTimestamp:2026-03-14 08:57:27.806347772 +0000 UTC m=+0.778629835,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.492809 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c642ad9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c642ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC m=+0.663770014,LastTimestamp:2026-03-14 08:57:27.806390063 +0000 UTC m=+0.778672126,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.497957 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64d034\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64d034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691530292 +0000 UTC m=+0.663812345,LastTimestamp:2026-03-14 08:57:27.80640557 +0000 UTC m=+0.778687643,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.505067 4869 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64f04e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64f04e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.69153851 +0000 UTC m=+0.663820563,LastTimestamp:2026-03-14 08:57:27.806417767 +0000 UTC m=+0.778699830,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.509120 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c642ad9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c642ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC m=+0.663770014,LastTimestamp:2026-03-14 08:57:27.808150603 +0000 UTC m=+0.780432656,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.514038 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64d034\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64d034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691530292 +0000 UTC m=+0.663812345,LastTimestamp:2026-03-14 08:57:27.808167449 +0000 UTC m=+0.780449502,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.520727 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64f04e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64f04e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.69153851 +0000 UTC m=+0.663820563,LastTimestamp:2026-03-14 08:57:27.808176597 +0000 UTC m=+0.780458650,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.528979 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c642ad9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c642ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC m=+0.663770014,LastTimestamp:2026-03-14 08:57:27.808230176 +0000 UTC m=+0.780512229,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.535958 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64d034\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64d034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691530292 +0000 UTC m=+0.663812345,LastTimestamp:2026-03-14 08:57:27.808242123 +0000 UTC m=+0.780524176,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.541623 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64f04e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64f04e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.69153851 +0000 UTC m=+0.663820563,LastTimestamp:2026-03-14 08:57:27.808250611 +0000 UTC m=+0.780532664,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.548242 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c642ad9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c642ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC m=+0.663770014,LastTimestamp:2026-03-14 08:57:27.809251144 +0000 UTC m=+0.781533207,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.555458 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64d034\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64d034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691530292 +0000 UTC 
m=+0.663812345,LastTimestamp:2026-03-14 08:57:27.809266881 +0000 UTC m=+0.781548944,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.558808 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c64f04e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c64f04e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.69153851 +0000 UTC m=+0.663820563,LastTimestamp:2026-03-14 08:57:27.809280169 +0000 UTC m=+0.781562232,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.563184 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c642ad9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c642ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC m=+0.663770014,LastTimestamp:2026-03-14 08:57:27.811194644 +0000 UTC m=+0.783476697,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.567004 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ca96f0c642ad9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ca96f0c642ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:27.691487961 +0000 UTC m=+0.663770014,LastTimestamp:2026-03-14 08:57:27.811209641 +0000 UTC m=+0.783491694,Count:9,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.571294 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca96f2a22f7f1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.190531569 +0000 UTC m=+1.162813622,LastTimestamp:2026-03-14 
08:57:28.190531569 +0000 UTC m=+1.162813622,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.574807 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96f2b570899 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.210720921 +0000 UTC m=+1.183002974,LastTimestamp:2026-03-14 08:57:28.210720921 +0000 UTC m=+1.183002974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.576424 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96f2b5a77ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.21094603 +0000 UTC m=+1.183228103,LastTimestamp:2026-03-14 08:57:28.21094603 +0000 UTC m=+1.183228103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.578734 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96f2c4f9422 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.22700957 +0000 UTC m=+1.199291623,LastTimestamp:2026-03-14 08:57:28.22700957 +0000 UTC m=+1.199291623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.581255 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f2cbed0a8 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.23429956 +0000 UTC m=+1.206581613,LastTimestamp:2026-03-14 08:57:28.23429956 +0000 UTC m=+1.206581613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.582705 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f4a06dfba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.725561274 +0000 UTC m=+1.697843327,LastTimestamp:2026-03-14 08:57:28.725561274 +0000 UTC m=+1.697843327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.587094 4869 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96f4a19c954 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.726800724 +0000 UTC m=+1.699082777,LastTimestamp:2026-03-14 08:57:28.726800724 +0000 UTC m=+1.699082777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.590769 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96f4a22e79c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.7273983 +0000 UTC m=+1.699680353,LastTimestamp:2026-03-14 08:57:28.7273983 +0000 UTC m=+1.699680353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc 
kubenswrapper[4869]: E0314 08:58:21.594988 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96f4a240693 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.727471763 +0000 UTC m=+1.699753836,LastTimestamp:2026-03-14 08:57:28.727471763 +0000 UTC m=+1.699753836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.599117 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca96f4a25fb28 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.727599912 +0000 UTC m=+1.699881965,LastTimestamp:2026-03-14 08:57:28.727599912 +0000 UTC m=+1.699881965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc 
kubenswrapper[4869]: E0314 08:58:21.603335 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f4ab659f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.737061369 +0000 UTC m=+1.709343422,LastTimestamp:2026-03-14 08:57:28.737061369 +0000 UTC m=+1.709343422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.607223 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96f4ac86928 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.738244904 +0000 UTC m=+1.710526957,LastTimestamp:2026-03-14 08:57:28.738244904 +0000 UTC m=+1.710526957,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.613681 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96f4ad30eda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.738942682 +0000 UTC m=+1.711224735,LastTimestamp:2026-03-14 08:57:28.738942682 +0000 UTC m=+1.711224735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.618241 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96f4ad4109a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.739008666 +0000 UTC m=+1.711290719,LastTimestamp:2026-03-14 08:57:28.739008666 +0000 UTC m=+1.711290719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.622990 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f4ad578a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.739100838 +0000 UTC m=+1.711382891,LastTimestamp:2026-03-14 08:57:28.739100838 +0000 UTC m=+1.711382891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.628253 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca96f4ad9565d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.739354205 +0000 UTC m=+1.711636258,LastTimestamp:2026-03-14 08:57:28.739354205 +0000 UTC m=+1.711636258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.632476 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f5a812964 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.00201098 +0000 UTC m=+1.974293023,LastTimestamp:2026-03-14 08:57:29.00201098 +0000 UTC m=+1.974293023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.636163 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f5b4480f8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.01481292 +0000 UTC m=+1.987095003,LastTimestamp:2026-03-14 08:57:29.01481292 +0000 UTC m=+1.987095003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.640007 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f5b576bb4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.01605266 +0000 UTC m=+1.988334733,LastTimestamp:2026-03-14 08:57:29.01605266 +0000 UTC m=+1.988334733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: I0314 08:58:21.645853 4869 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.645896 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f6a641685 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.268541061 +0000 UTC m=+2.240823114,LastTimestamp:2026-03-14 08:57:29.268541061 +0000 UTC m=+2.240823114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.649958 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f6b004afc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.278778108 +0000 UTC m=+2.251060161,LastTimestamp:2026-03-14 08:57:29.278778108 +0000 UTC m=+2.251060161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.653602 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f6b10d11e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.279861022 +0000 UTC m=+2.252143075,LastTimestamp:2026-03-14 08:57:29.279861022 +0000 UTC m=+2.252143075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.657760 4869 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f757dcc1b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.454775323 +0000 UTC m=+2.427057376,LastTimestamp:2026-03-14 08:57:29.454775323 +0000 UTC m=+2.427057376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.662440 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f76487985 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.468057989 +0000 UTC m=+2.440340082,LastTimestamp:2026-03-14 08:57:29.468057989 +0000 UTC 
m=+2.440340082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.667107 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96f8573aa36 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.722546742 +0000 UTC m=+2.694828805,LastTimestamp:2026-03-14 08:57:29.722546742 +0000 UTC m=+2.694828805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.672283 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96f85c43c08 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.727826952 +0000 UTC m=+2.700109005,LastTimestamp:2026-03-14 08:57:29.727826952 +0000 UTC m=+2.700109005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.675801 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca96f85e1ee16 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.729773078 +0000 UTC m=+2.702055141,LastTimestamp:2026-03-14 08:57:29.729773078 +0000 UTC m=+2.702055141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.679742 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96f8699ef4e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.741832014 +0000 UTC m=+2.714114067,LastTimestamp:2026-03-14 08:57:29.741832014 +0000 UTC m=+2.714114067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.682913 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96f92297f3d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.935789885 +0000 UTC m=+2.908071938,LastTimestamp:2026-03-14 08:57:29.935789885 +0000 UTC m=+2.908071938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.684368 4869 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96f923d912a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.937105194 +0000 UTC m=+2.909387247,LastTimestamp:2026-03-14 08:57:29.937105194 +0000 UTC m=+2.909387247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.688587 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca96f92ace729 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.944401705 +0000 UTC m=+2.916683758,LastTimestamp:2026-03-14 08:57:29.944401705 +0000 UTC m=+2.916683758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc 
kubenswrapper[4869]: E0314 08:58:21.692256 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96f92d165be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.946793406 +0000 UTC m=+2.919075459,LastTimestamp:2026-03-14 08:57:29.946793406 +0000 UTC m=+2.919075459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.695971 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96f9306b43e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.95028691 +0000 UTC m=+2.922568963,LastTimestamp:2026-03-14 08:57:29.95028691 +0000 UTC m=+2.922568963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.699228 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96f931e4641 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.951831617 +0000 UTC m=+2.924113670,LastTimestamp:2026-03-14 08:57:29.951831617 +0000 UTC m=+2.924113670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.703604 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ca96f9367f22e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.956659758 +0000 UTC m=+2.928941811,LastTimestamp:2026-03-14 08:57:29.956659758 +0000 UTC m=+2.928941811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.709366 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96f94c7d339 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.979720505 +0000 UTC m=+2.952002568,LastTimestamp:2026-03-14 08:57:29.979720505 +0000 UTC m=+2.952002568,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.712966 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96f94cc470b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.980012299 +0000 UTC m=+2.952294352,LastTimestamp:2026-03-14 08:57:29.980012299 +0000 UTC m=+2.952294352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.716331 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96f94d93380 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.980859264 +0000 UTC m=+2.953141317,LastTimestamp:2026-03-14 08:57:29.980859264 +0000 UTC m=+2.953141317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.719604 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96fa09d5f3a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.17826489 +0000 UTC m=+3.150546973,LastTimestamp:2026-03-14 08:57:30.17826489 +0000 UTC m=+3.150546973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.722579 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fa119bde9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.186415593 +0000 UTC m=+3.158697686,LastTimestamp:2026-03-14 08:57:30.186415593 +0000 UTC m=+3.158697686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.725932 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96fa46d198c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.242210188 +0000 UTC m=+3.214492251,LastTimestamp:2026-03-14 08:57:30.242210188 +0000 UTC m=+3.214492251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.729838 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96fa486d638 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.243896888 +0000 UTC m=+3.216178961,LastTimestamp:2026-03-14 08:57:30.243896888 +0000 UTC m=+3.216178961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc 
kubenswrapper[4869]: E0314 08:58:21.734212 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fa4eb336a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.250474346 +0000 UTC m=+3.222756419,LastTimestamp:2026-03-14 08:57:30.250474346 +0000 UTC m=+3.222756419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.738201 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fa50b0117 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.252558615 +0000 UTC 
m=+3.224840678,LastTimestamp:2026-03-14 08:57:30.252558615 +0000 UTC m=+3.224840678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.741712 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96fae58c057 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.408648791 +0000 UTC m=+3.380930844,LastTimestamp:2026-03-14 08:57:30.408648791 +0000 UTC m=+3.380930844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.747498 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fae80a604 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container 
kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.411263492 +0000 UTC m=+3.383545545,LastTimestamp:2026-03-14 08:57:30.411263492 +0000 UTC m=+3.383545545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.751069 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ca96faeeab6f0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.41821464 +0000 UTC m=+3.390496693,LastTimestamp:2026-03-14 08:57:30.41821464 +0000 UTC m=+3.390496693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.756499 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96faf79f8d6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.427603158 +0000 UTC m=+3.399885211,LastTimestamp:2026-03-14 08:57:30.427603158 +0000 UTC m=+3.399885211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.760034 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96faf8ffa5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.42904534 +0000 UTC m=+3.401327393,LastTimestamp:2026-03-14 08:57:30.42904534 +0000 UTC m=+3.401327393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.763790 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fb928df1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.590060314 +0000 UTC m=+3.562342367,LastTimestamp:2026-03-14 08:57:30.590060314 +0000 UTC m=+3.562342367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.767010 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fb9b44dd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.599198165 +0000 UTC m=+3.571480218,LastTimestamp:2026-03-14 08:57:30.599198165 +0000 UTC m=+3.571480218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.771823 
4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fb9c5ea75 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.600352373 +0000 UTC m=+3.572634426,LastTimestamp:2026-03-14 08:57:30.600352373 +0000 UTC m=+3.572634426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.776006 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96fc2328684 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.74168794 +0000 UTC 
m=+3.713969993,LastTimestamp:2026-03-14 08:57:30.74168794 +0000 UTC m=+3.713969993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.780230 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fc4641bc0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.77849184 +0000 UTC m=+3.750773893,LastTimestamp:2026-03-14 08:57:30.77849184 +0000 UTC m=+3.750773893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.783807 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fc4f7fa22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.788182562 +0000 UTC m=+3.760464615,LastTimestamp:2026-03-14 08:57:30.788182562 +0000 UTC m=+3.760464615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.786990 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96fcb24dd12 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.891787538 +0000 UTC m=+3.864069591,LastTimestamp:2026-03-14 08:57:30.891787538 +0000 UTC m=+3.864069591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.790004 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96fcbb1e032 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.901028914 +0000 UTC m=+3.873310967,LastTimestamp:2026-03-14 08:57:30.901028914 +0000 UTC m=+3.873310967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.795892 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca96fff35916a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:31.765297514 +0000 UTC m=+4.737579587,LastTimestamp:2026-03-14 08:57:31.765297514 +0000 UTC m=+4.737579587,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.799501 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca97008fb98bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:31.92927046 +0000 UTC m=+4.901552513,LastTimestamp:2026-03-14 08:57:31.92927046 +0000 UTC m=+4.901552513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.803371 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca97009674d52 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:31.936329042 +0000 UTC m=+4.908611095,LastTimestamp:2026-03-14 08:57:31.936329042 +0000 UTC m=+4.908611095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.808343 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9700973efc2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:31.937157058 +0000 UTC m=+4.909439111,LastTimestamp:2026-03-14 08:57:31.937157058 +0000 UTC m=+4.909439111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.812214 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca97011e78c42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.07895149 +0000 UTC m=+5.051233543,LastTimestamp:2026-03-14 08:57:32.07895149 +0000 UTC m=+5.051233543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.815423 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9701352d56a openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.102759786 +0000 UTC m=+5.075041839,LastTimestamp:2026-03-14 08:57:32.102759786 +0000 UTC m=+5.075041839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.819297 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9701360186e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.10362891 +0000 UTC m=+5.075910963,LastTimestamp:2026-03-14 08:57:32.10362891 +0000 UTC m=+5.075910963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.823390 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9701d86abe3 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.273929187 +0000 UTC m=+5.246211240,LastTimestamp:2026-03-14 08:57:32.273929187 +0000 UTC m=+5.246211240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.828034 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9701e25f8f2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.284369138 +0000 UTC m=+5.256651181,LastTimestamp:2026-03-14 08:57:32.284369138 +0000 UTC m=+5.256651181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.832255 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca9701e345604 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.285310468 +0000 UTC m=+5.257592521,LastTimestamp:2026-03-14 08:57:32.285310468 +0000 UTC m=+5.257592521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.836824 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca97027d31db5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.446711221 +0000 UTC m=+5.418993274,LastTimestamp:2026-03-14 08:57:32.446711221 +0000 UTC m=+5.418993274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.841951 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca97028a15d3a 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.460227898 +0000 UTC m=+5.432509951,LastTimestamp:2026-03-14 08:57:32.460227898 +0000 UTC m=+5.432509951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.848026 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca97028b21bf2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.461325298 +0000 UTC m=+5.433607351,LastTimestamp:2026-03-14 08:57:32.461325298 +0000 UTC m=+5.433607351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.851817 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189ca9703244eeac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.621942444 +0000 UTC m=+5.594224507,LastTimestamp:2026-03-14 08:57:32.621942444 +0000 UTC m=+5.594224507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.855808 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ca970330cc88a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:32.635039882 +0000 UTC m=+5.607321935,LastTimestamp:2026-03-14 08:57:32.635039882 +0000 UTC m=+5.607321935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.867592 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 08:58:21 crc kubenswrapper[4869]: 
&Event{ObjectMeta:{kube-controller-manager-crc.189ca971c6c6a749 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:21 crc kubenswrapper[4869]: body: Mar 14 08:58:21 crc kubenswrapper[4869]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:39.408439113 +0000 UTC m=+12.380721166,LastTimestamp:2026-03-14 08:57:39.408439113 +0000 UTC m=+12.380721166,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:21 crc kubenswrapper[4869]: > Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.872415 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca971c6c77be5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:39.408493541 +0000 UTC 
m=+12.380775594,LastTimestamp:2026-03-14 08:57:39.408493541 +0000 UTC m=+12.380775594,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.877190 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 14 08:58:21 crc kubenswrapper[4869]: &Event{ObjectMeta:{kube-apiserver-crc.189ca9723e04a902 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 14 08:58:21 crc kubenswrapper[4869]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 08:58:21 crc kubenswrapper[4869]: Mar 14 08:58:21 crc kubenswrapper[4869]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:41.40899149 +0000 UTC m=+14.381273553,LastTimestamp:2026-03-14 08:57:41.40899149 +0000 UTC m=+14.381273553,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:21 crc kubenswrapper[4869]: > Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.885034 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9723e065135 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:41.409100085 +0000 UTC m=+14.381382148,LastTimestamp:2026-03-14 08:57:41.409100085 +0000 UTC m=+14.381382148,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.888955 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca9723e04a902\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 14 08:58:21 crc kubenswrapper[4869]: &Event{ObjectMeta:{kube-apiserver-crc.189ca9723e04a902 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 14 08:58:21 crc kubenswrapper[4869]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 14 08:58:21 crc kubenswrapper[4869]: Mar 14 08:58:21 crc kubenswrapper[4869]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:41.40899149 +0000 UTC 
m=+14.381273553,LastTimestamp:2026-03-14 08:57:41.413425453 +0000 UTC m=+14.385707506,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:21 crc kubenswrapper[4869]: > Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.893272 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca9723e065135\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca9723e065135 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:41.409100085 +0000 UTC m=+14.381382148,LastTimestamp:2026-03-14 08:57:41.413475711 +0000 UTC m=+14.385757764,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.897838 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca96fb9c5ea75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fb9c5ea75 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.600352373 +0000 UTC m=+3.572634426,LastTimestamp:2026-03-14 08:57:42.800595422 +0000 UTC m=+15.772877485,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.902557 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ca96fc4641bc0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fc4641bc0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.77849184 +0000 UTC m=+3.750773893,LastTimestamp:2026-03-14 08:57:42.934005964 +0000 UTC m=+15.906288027,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.907675 4869 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.189ca96fc4f7fa22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ca96fc4f7fa22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:30.788182562 +0000 UTC m=+3.760464615,LastTimestamp:2026-03-14 08:57:42.943756002 +0000 UTC m=+15.916038045,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.914191 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 08:58:21 crc kubenswrapper[4869]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca9741acd3b28 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:21 crc kubenswrapper[4869]: body: Mar 14 08:58:21 crc kubenswrapper[4869]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:49.40809092 +0000 UTC m=+22.380372983,LastTimestamp:2026-03-14 08:57:49.40809092 +0000 UTC m=+22.380372983,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:21 crc kubenswrapper[4869]: > Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.918841 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9741acdfd75 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:49.408140661 +0000 UTC m=+22.380422714,LastTimestamp:2026-03-14 08:57:49.408140661 +0000 UTC m=+22.380422714,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.923830 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca9741acd3b28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 08:58:21 crc kubenswrapper[4869]: 
&Event{ObjectMeta:{kube-controller-manager-crc.189ca9741acd3b28 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:21 crc kubenswrapper[4869]: body: Mar 14 08:58:21 crc kubenswrapper[4869]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:49.40809092 +0000 UTC m=+22.380372983,LastTimestamp:2026-03-14 08:57:59.410058358 +0000 UTC m=+32.382340401,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:21 crc kubenswrapper[4869]: > Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.928866 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca9741acdfd75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9741acdfd75 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:49.408140661 +0000 UTC m=+22.380422714,LastTimestamp:2026-03-14 08:57:59.410188541 +0000 UTC m=+32.382470604,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.934707 4869 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9766f1b93af openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:59.412446127 +0000 UTC m=+32.384728200,LastTimestamp:2026-03-14 08:57:59.412446127 +0000 UTC m=+32.384728200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.938662 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca96f4ad578a6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f4ad578a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:28.739100838 +0000 UTC m=+1.711382891,LastTimestamp:2026-03-14 08:57:59.598593445 +0000 UTC m=+32.570875498,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.943677 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca96f5a812964\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f5a812964 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.00201098 +0000 UTC m=+1.974293023,LastTimestamp:2026-03-14 08:57:59.767974634 +0000 UTC m=+32.740256697,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: I0314 08:58:21.947751 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.948176 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca96f5b4480f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca96f5b4480f8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:29.01481292 +0000 UTC m=+1.987095003,LastTimestamp:2026-03-14 08:57:59.799383259 +0000 UTC m=+32.771665312,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.954770 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca9741acd3b28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 08:58:21 crc kubenswrapper[4869]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca9741acd3b28 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:21 crc kubenswrapper[4869]: body: Mar 14 08:58:21 crc kubenswrapper[4869]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:49.40809092 +0000 UTC m=+22.380372983,LastTimestamp:2026-03-14 08:58:09.40925964 +0000 UTC m=+42.381541693,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:21 crc kubenswrapper[4869]: > Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.958666 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca9741acdfd75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ca9741acdfd75 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:49.408140661 +0000 UTC m=+22.380422714,LastTimestamp:2026-03-14 08:58:09.409354392 
+0000 UTC m=+42.381636445,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 08:58:21 crc kubenswrapper[4869]: E0314 08:58:21.964937 4869 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ca9741acd3b28\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 14 08:58:21 crc kubenswrapper[4869]: &Event{ObjectMeta:{kube-controller-manager-crc.189ca9741acd3b28 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 14 08:58:21 crc kubenswrapper[4869]: body: Mar 14 08:58:21 crc kubenswrapper[4869]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 08:57:49.40809092 +0000 UTC m=+22.380372983,LastTimestamp:2026-03-14 08:58:19.408394417 +0000 UTC m=+52.380676470,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 14 08:58:21 crc kubenswrapper[4869]: > Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.436079 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.436234 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:22 crc 
kubenswrapper[4869]: I0314 08:58:22.437886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.437926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.437935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.642744 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:22 crc kubenswrapper[4869]: E0314 08:58:22.827786 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.832958 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.834369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.834407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.834418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:22 crc kubenswrapper[4869]: I0314 08:58:22.834446 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:58:22 crc kubenswrapper[4869]: E0314 
08:58:22.838598 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 08:58:23 crc kubenswrapper[4869]: I0314 08:58:23.645604 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:23 crc kubenswrapper[4869]: I0314 08:58:23.848289 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:58:23 crc kubenswrapper[4869]: I0314 08:58:23.848551 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:23 crc kubenswrapper[4869]: I0314 08:58:23.850058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:23 crc kubenswrapper[4869]: I0314 08:58:23.850090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:23 crc kubenswrapper[4869]: I0314 08:58:23.850100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:23 crc kubenswrapper[4869]: I0314 08:58:23.850685 4869 scope.go:117] "RemoveContainer" containerID="17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc" Mar 14 08:58:23 crc kubenswrapper[4869]: E0314 08:58:23.850870 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:24 crc kubenswrapper[4869]: I0314 08:58:24.650157 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:25 crc kubenswrapper[4869]: I0314 08:58:25.647914 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:26 crc kubenswrapper[4869]: I0314 08:58:26.646313 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:27 crc kubenswrapper[4869]: I0314 08:58:27.647117 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:27 crc kubenswrapper[4869]: E0314 08:58:27.781421 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:58:27 crc kubenswrapper[4869]: I0314 08:58:27.827995 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:58:27 crc kubenswrapper[4869]: I0314 08:58:27.828300 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:27 crc kubenswrapper[4869]: I0314 08:58:27.829890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 14 08:58:27 crc kubenswrapper[4869]: I0314 08:58:27.829950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:27 crc kubenswrapper[4869]: I0314 08:58:27.829964 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:27 crc kubenswrapper[4869]: I0314 08:58:27.830636 4869 scope.go:117] "RemoveContainer" containerID="17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc" Mar 14 08:58:27 crc kubenswrapper[4869]: E0314 08:58:27.830891 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:28 crc kubenswrapper[4869]: I0314 08:58:28.649586 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.409740 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.409868 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.410003 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.410242 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.412125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.412238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.412396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.413689 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"54305702853ee9b84f5c9e0873af79ee05cd4393d41fc890f0fb67d393fa048c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.413855 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://54305702853ee9b84f5c9e0873af79ee05cd4393d41fc890f0fb67d393fa048c" gracePeriod=30 Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 
08:58:29.649316 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:29 crc kubenswrapper[4869]: E0314 08:58:29.833544 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.839599 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.840990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.841020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.841031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.841055 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:58:29 crc kubenswrapper[4869]: E0314 08:58:29.845267 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.975461 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.976796 4869 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.977316 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="54305702853ee9b84f5c9e0873af79ee05cd4393d41fc890f0fb67d393fa048c" exitCode=255 Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.977366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"54305702853ee9b84f5c9e0873af79ee05cd4393d41fc890f0fb67d393fa048c"} Mar 14 08:58:29 crc kubenswrapper[4869]: I0314 08:58:29.977415 4869 scope.go:117] "RemoveContainer" containerID="64c501a0f06b3c2c0485bf8bdb975073ae34bdb4cb26e9ed81a78ac5ff5a4f21" Mar 14 08:58:30 crc kubenswrapper[4869]: I0314 08:58:30.648783 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:30 crc kubenswrapper[4869]: I0314 08:58:30.980953 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 08:58:30 crc kubenswrapper[4869]: I0314 08:58:30.981807 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"99b209c4c7b25dd6954b494380c36af2ec214e81fc357c40b8ad43d178533a6c"} Mar 14 08:58:30 crc kubenswrapper[4869]: I0314 08:58:30.981931 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 14 08:58:30 crc kubenswrapper[4869]: I0314 08:58:30.982858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:30 crc kubenswrapper[4869]: I0314 08:58:30.982887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:30 crc kubenswrapper[4869]: I0314 08:58:30.982897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:31 crc kubenswrapper[4869]: I0314 08:58:31.645990 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:31 crc kubenswrapper[4869]: I0314 08:58:31.983967 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:31 crc kubenswrapper[4869]: I0314 08:58:31.984831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:31 crc kubenswrapper[4869]: I0314 08:58:31.984878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:31 crc kubenswrapper[4869]: I0314 08:58:31.984888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:32 crc kubenswrapper[4869]: I0314 08:58:32.646633 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:32 crc kubenswrapper[4869]: I0314 08:58:32.703420 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:32 
crc kubenswrapper[4869]: I0314 08:58:32.704577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:32 crc kubenswrapper[4869]: I0314 08:58:32.704627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:32 crc kubenswrapper[4869]: I0314 08:58:32.704642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:33 crc kubenswrapper[4869]: I0314 08:58:33.645076 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:33 crc kubenswrapper[4869]: I0314 08:58:33.750103 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:58:33 crc kubenswrapper[4869]: I0314 08:58:33.750315 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:33 crc kubenswrapper[4869]: I0314 08:58:33.752003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:33 crc kubenswrapper[4869]: I0314 08:58:33.752040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:33 crc kubenswrapper[4869]: I0314 08:58:33.752052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:34 crc kubenswrapper[4869]: I0314 08:58:34.646116 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 
08:58:35 crc kubenswrapper[4869]: I0314 08:58:35.646047 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.408728 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.408922 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.410139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.410193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.410206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.412747 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.644632 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:36 crc kubenswrapper[4869]: E0314 08:58:36.837724 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace 
\"kube-node-lease\"" interval="7s" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.846013 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.847281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.847324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.847334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.847362 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:58:36 crc kubenswrapper[4869]: E0314 08:58:36.851004 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.996387 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.997168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.997211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:36 crc kubenswrapper[4869]: I0314 08:58:36.997222 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:37 crc kubenswrapper[4869]: I0314 08:58:37.648498 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:37 crc kubenswrapper[4869]: E0314 08:58:37.782046 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:58:38 crc kubenswrapper[4869]: I0314 08:58:38.645138 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:38 crc kubenswrapper[4869]: I0314 08:58:38.703142 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:38 crc kubenswrapper[4869]: I0314 08:58:38.704324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:38 crc kubenswrapper[4869]: I0314 08:58:38.704381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:38 crc kubenswrapper[4869]: I0314 08:58:38.704392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:38 crc kubenswrapper[4869]: I0314 08:58:38.705045 4869 scope.go:117] "RemoveContainer" containerID="17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc" Mar 14 08:58:38 crc kubenswrapper[4869]: E0314 08:58:38.705231 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:39 crc kubenswrapper[4869]: I0314 
08:58:39.369938 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 14 08:58:39 crc kubenswrapper[4869]: I0314 08:58:39.397124 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 14 08:58:39 crc kubenswrapper[4869]: I0314 08:58:39.644867 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:40 crc kubenswrapper[4869]: I0314 08:58:40.645470 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:41 crc kubenswrapper[4869]: I0314 08:58:41.647728 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:42 crc kubenswrapper[4869]: I0314 08:58:42.645066 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.645399 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.753953 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.754122 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.755308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.755460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.755577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:43 crc kubenswrapper[4869]: E0314 08:58:43.842773 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.852027 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.853153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.853318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.853404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:43 crc kubenswrapper[4869]: I0314 08:58:43.853503 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:58:43 crc kubenswrapper[4869]: E0314 08:58:43.857567 4869 kubelet_node_status.go:99] "Unable 
to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 14 08:58:44 crc kubenswrapper[4869]: I0314 08:58:44.115941 4869 csr.go:261] certificate signing request csr-gzsw9 is approved, waiting to be issued Mar 14 08:58:44 crc kubenswrapper[4869]: I0314 08:58:44.125315 4869 csr.go:257] certificate signing request csr-gzsw9 is issued Mar 14 08:58:44 crc kubenswrapper[4869]: I0314 08:58:44.239264 4869 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 14 08:58:44 crc kubenswrapper[4869]: I0314 08:58:44.478959 4869 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 14 08:58:45 crc kubenswrapper[4869]: I0314 08:58:45.127471 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-30 13:00:10.063145605 +0000 UTC Mar 14 08:58:45 crc kubenswrapper[4869]: I0314 08:58:45.127570 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6988h1m24.935579495s for next certificate rotation Mar 14 08:58:47 crc kubenswrapper[4869]: E0314 08:58:47.783152 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.858479 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.859577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.859613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.859623 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.859733 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.866414 4869 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.866674 4869 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 14 08:58:50 crc kubenswrapper[4869]: E0314 08:58:50.866696 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.869264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.869302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.869313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.869331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.869344 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:50Z","lastTransitionTime":"2026-03-14T08:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:50 crc kubenswrapper[4869]: E0314 08:58:50.880151 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.885808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.885848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.885867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.885886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.885898 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:50Z","lastTransitionTime":"2026-03-14T08:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:50 crc kubenswrapper[4869]: E0314 08:58:50.894265 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.899893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.899952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.899965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.899983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.899994 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:50Z","lastTransitionTime":"2026-03-14T08:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:50 crc kubenswrapper[4869]: E0314 08:58:50.907807 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.913377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.913420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.913437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.913456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:50 crc kubenswrapper[4869]: I0314 08:58:50.913469 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:50Z","lastTransitionTime":"2026-03-14T08:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:58:50 crc kubenswrapper[4869]: E0314 08:58:50.921349 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:58:50 crc kubenswrapper[4869]: E0314 08:58:50.921455 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:58:50 crc kubenswrapper[4869]: E0314 08:58:50.921477 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.022156 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.122339 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.222833 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: I0314 08:58:51.284338 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.323741 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.424341 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.524545 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.625164 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: I0314 08:58:51.702764 4869 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 14 08:58:51 crc kubenswrapper[4869]: I0314 08:58:51.703897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:51 crc kubenswrapper[4869]: I0314 08:58:51.703935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:51 crc kubenswrapper[4869]: I0314 08:58:51.703953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:51 crc kubenswrapper[4869]: I0314 08:58:51.704603 4869 scope.go:117] "RemoveContainer" containerID="17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.704795 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.726018 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.826206 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:51 crc kubenswrapper[4869]: E0314 08:58:51.926749 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 08:58:52.027675 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 
08:58:52.128175 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 08:58:52.229130 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 08:58:52.330067 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 08:58:52.431020 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 08:58:52.532126 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 08:58:52.633005 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 08:58:52.733599 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 08:58:52.833852 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:52 crc kubenswrapper[4869]: E0314 08:58:52.934208 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.035307 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.135454 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.236405 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 
08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.337295 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.438410 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.539553 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.640634 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.741685 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.842217 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:53 crc kubenswrapper[4869]: E0314 08:58:53.942313 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.042668 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.142832 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.243710 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.344288 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.445368 4869 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.545782 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.646648 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.747265 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.847444 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:54 crc kubenswrapper[4869]: E0314 08:58:54.948048 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.048664 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.149545 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.250721 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.351847 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.452465 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.553539 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.653954 4869 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.754856 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.856035 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:55 crc kubenswrapper[4869]: E0314 08:58:55.957152 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc kubenswrapper[4869]: E0314 08:58:56.057506 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc kubenswrapper[4869]: E0314 08:58:56.158548 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc kubenswrapper[4869]: E0314 08:58:56.259251 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc kubenswrapper[4869]: E0314 08:58:56.360393 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc kubenswrapper[4869]: E0314 08:58:56.460790 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc kubenswrapper[4869]: E0314 08:58:56.561902 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc kubenswrapper[4869]: E0314 08:58:56.662848 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc kubenswrapper[4869]: E0314 08:58:56.763291 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc 
kubenswrapper[4869]: E0314 08:58:56.863784 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:56 crc kubenswrapper[4869]: E0314 08:58:56.964908 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.065775 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.166207 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.267250 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.368019 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.468658 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.569284 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.669853 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.770687 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.783929 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.871278 4869 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Mar 14 08:58:57 crc kubenswrapper[4869]: E0314 08:58:57.971707 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.072483 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.173127 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.273554 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.374594 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.475573 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.576706 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.677400 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.778349 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.879555 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:58 crc kubenswrapper[4869]: E0314 08:58:58.980609 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:59 crc kubenswrapper[4869]: E0314 08:58:59.081166 4869 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:59 crc kubenswrapper[4869]: E0314 08:58:59.181591 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:59 crc kubenswrapper[4869]: E0314 08:58:59.282451 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:59 crc kubenswrapper[4869]: E0314 08:58:59.383334 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:59 crc kubenswrapper[4869]: E0314 08:58:59.483649 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:59 crc kubenswrapper[4869]: E0314 08:58:59.584842 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:59 crc kubenswrapper[4869]: E0314 08:58:59.685650 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:59 crc kubenswrapper[4869]: E0314 08:58:59.786803 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.854619 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.889116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.889160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.889170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:59 crc 
kubenswrapper[4869]: I0314 08:58:59.889187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.889200 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:59Z","lastTransitionTime":"2026-03-14T08:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.992471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.992571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.992586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.992612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:58:59 crc kubenswrapper[4869]: I0314 08:58:59.992643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:58:59Z","lastTransitionTime":"2026-03-14T08:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.095073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.095136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.095146 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.095167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.095179 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.162860 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.197892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.197947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.197961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.197981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.197997 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.301008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.301068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.301077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.301097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.301111 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.404182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.404229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.404244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.404263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.404274 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.507348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.507384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.507395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.507409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.507420 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.610138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.610209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.610225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.610249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.610266 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.681718 4869 apiserver.go:52] "Watching apiserver" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.687461 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.687823 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.688250 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.688961 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.689083 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.689222 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.689307 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.689344 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.689303 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.689305 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.689437 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.691899 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.693107 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.693280 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.693452 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.693282 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.693697 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.693708 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.693782 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.695185 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.712451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc 
kubenswrapper[4869]: I0314 08:59:00.712490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.712499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.712534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.712546 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.714503 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.727342 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.742534 4869 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.742806 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.742845 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.742837 4869 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.742870 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.742894 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.742915 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.742942 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743030 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743058 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743117 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743145 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743173 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743199 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743222 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743245 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743269 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743292 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743320 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743343 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743368 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743415 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743438 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743477 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743492 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743571 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743594 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 
08:59:00.743617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743643 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743692 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743717 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743743 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743767 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743791 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743808 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743881 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743904 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.743990 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744015 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744038 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744100 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744123 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744148 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744170 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744192 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744248 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744250 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod 
"5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744297 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744327 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744281 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744349 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744572 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744620 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744633 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744666 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744692 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744719 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744718 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744745 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744772 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744795 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744822 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744846 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744865 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744902 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744930 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744952 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.744980 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745004 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745058 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745096 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745106 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745199 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745209 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745233 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745289 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745342 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745348 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" 
(OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745369 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745399 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745424 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745449 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745477 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745528 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745556 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745578 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745604 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745629 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745679 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745708 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745735 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745761 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745813 
4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745851 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745882 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745909 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745933 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745957 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745983 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746036 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746065 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746095 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746128 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747715 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747753 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747813 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747838 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747864 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747892 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748012 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748041 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748065 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748150 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748193 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748260 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748666 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748725 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748752 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748866 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748893 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748951 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748976 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749002 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749030 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 14 08:59:00 crc 
kubenswrapper[4869]: I0314 08:59:00.749056 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749084 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749112 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749140 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749168 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749203 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" 
(UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749228 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749252 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749479 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749534 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749565 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749594 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749620 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749648 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749704 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749734 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749763 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749799 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749829 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749858 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749883 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 
08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749907 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749931 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749956 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.752159 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.752209 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.752248 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.752307 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.752346 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.752452 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.753156 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.753426 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.753495 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.756086 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760299 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760340 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760387 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760430 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760470 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760501 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760559 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 
14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760625 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760664 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760699 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760734 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760768 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760802 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760841 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760871 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760905 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760951 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760985 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 14 
08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761021 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761056 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761220 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761252 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761288 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761328 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761365 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761433 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761474 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761610 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761656 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.761746 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.762038 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.762106 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.762217 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.762303 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.762345 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.762679 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.763010 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745627 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745638 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.745847 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746012 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746038 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746261 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746305 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746492 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746659 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746799 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746824 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746856 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746916 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.746913 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747145 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747057 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747171 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747245 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747316 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.747616 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.748316 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749298 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749473 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749788 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749801 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.752739 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.752930 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.752828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.753219 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.753257 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.749977 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.754259 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.754559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.754596 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.754754 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.755488 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.755626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.755680 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.755834 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.755961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.756169 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.756208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.756303 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.756534 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.756886 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.757312 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.757352 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.757437 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.757441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.757883 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.758179 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.758290 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.758632 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.758985 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.759025 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.759118 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.759153 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.759209 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.763704 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.759494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.759559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.759593 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.759813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.760209 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.763387 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.764092 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.765172 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.765213 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.765236 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.765563 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.765757 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.764356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.765760 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.765964 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.766053 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.754314 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.766538 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.767324 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.767459 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.767559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.767893 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.768358 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.768889 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.769584 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.767174 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.769930 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.770153 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.770220 4869 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.770317 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.770338 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.770389 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.770465 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.770854 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.771037 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.771195 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.771275 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.771412 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.773068 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.773128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.773628 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.775195 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.775848 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.775943 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.774558 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.776434 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.776182 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.776206 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.776833 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:01.276806547 +0000 UTC m=+94.249088600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.776846 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.777783 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.777857 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.768305 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.778217 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.778326 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.764927 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.768073 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.767134 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.782744 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.765130 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.783629 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:59:01.283595315 +0000 UTC m=+94.255877388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.789132 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.789207 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.789164 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.789427 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.789535 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.789673 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.789725 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.789825 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.767279 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.789941 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.789937 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.790003 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-14 08:59:01.289977403 +0000 UTC m=+94.262259466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790025 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790052 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790072 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790088 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790104 4869 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790119 4869 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790348 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.767608 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790389 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790410 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790669 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790695 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.790786 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791018 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791259 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791289 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791312 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791331 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791410 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791439 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791466 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791491 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791540 4869 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791562 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.791691 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 
08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.791866 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.792007 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.792029 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.792056 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.792230 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.792382 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.793762 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.792381 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.792465 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.793825 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.793841 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.792491 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.792789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.792787 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.792849 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.793386 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.793406 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.793424 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.793917 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.794059 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.794152 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:01.294133586 +0000 UTC m=+94.266415649 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.796980 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:01.296941495 +0000 UTC m=+94.269223718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.794784 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.795336 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.795586 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797047 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.796243 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.796166 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797096 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797139 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797182 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797204 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797228 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797245 4869 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797259 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797274 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797288 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797301 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797314 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797322 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797329 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797388 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797404 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797418 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797433 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797445 4869 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797458 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc 
kubenswrapper[4869]: I0314 08:59:00.797470 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797482 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797493 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797543 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797556 4869 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797566 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797577 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797588 4869 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797602 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797615 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797625 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797636 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797648 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797659 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797670 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797681 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797692 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797704 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797714 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797728 4869 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797738 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797749 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 
08:59:00.797760 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797771 4869 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797781 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797793 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797804 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797815 4869 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797825 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797836 4869 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797846 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797859 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797870 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797880 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797891 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797901 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797913 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node 
\"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797923 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797932 4869 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797943 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797953 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797963 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797973 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797981 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.797991 4869 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798003 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798013 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798024 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798035 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798048 4869 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798058 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798068 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798079 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798089 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798100 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798110 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798121 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798133 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798144 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on 
node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798155 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798166 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798177 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798187 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798197 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798208 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798218 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798229 
4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798240 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798251 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798268 4869 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798279 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798290 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798302 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 
08:59:00.798313 4869 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798324 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798334 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798344 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798356 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798366 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798377 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798407 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798417 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798429 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798440 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798451 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798461 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.798471 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.795829 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.799021 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.799545 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.799721 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.800369 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.800644 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.801070 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.801172 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.801248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.803728 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.803794 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.803838 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.804036 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.804112 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.804323 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.807774 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.808355 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.808427 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.809248 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.811288 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.815738 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.815765 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.815774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.815788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.815799 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.816503 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.816486 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.816314 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.816879 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.816996 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.817010 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.817279 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.817475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.821047 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.821296 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.821593 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.821623 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.822125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.822128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.822622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.823145 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.823318 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.823560 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.823742 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.823867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.824232 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.824503 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.825887 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.836897 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.837620 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.838405 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899381 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899448 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899589 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899657 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899721 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899806 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899885 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.899979 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900035 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" 
Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900086 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900137 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900195 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900247 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900297 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900354 4869 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900410 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900463 4869 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900547 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900609 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900661 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900714 4869 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900764 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900814 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900889 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.900961 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901023 4869 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901081 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901136 4869 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901199 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901255 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901305 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" 
DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901573 4869 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901634 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901690 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901749 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901803 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901856 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901913 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.901967 4869 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902041 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902112 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902165 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902221 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902277 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902334 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902389 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" 
(UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902442 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902495 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902564 4869 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902624 4869 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902676 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902736 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902788 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902846 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902899 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.902953 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903003 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903062 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903131 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903205 4869 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") 
on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903266 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903322 4869 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903380 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903431 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903481 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903553 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903607 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc 
kubenswrapper[4869]: I0314 08:59:00.903670 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903795 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.903852 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.918935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.918981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.918994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.919009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.919020 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.981169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.981214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.981229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.981247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:00 crc kubenswrapper[4869]: I0314 08:59:00.981258 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:00Z","lastTransitionTime":"2026-03-14T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:00 crc kubenswrapper[4869]: E0314 08:59:00.995929 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.000285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.000324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.000338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.000358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.000371 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.004681 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.014708 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.014567 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.020444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.020544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.020574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.020605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.020629 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.022801 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.023142 4869 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 08:59:01 crc kubenswrapper[4869]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 14 08:59:01 crc kubenswrapper[4869]: set -o allexport Mar 14 08:59:01 crc kubenswrapper[4869]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 14 08:59:01 crc kubenswrapper[4869]: source /etc/kubernetes/apiserver-url.env Mar 14 08:59:01 crc kubenswrapper[4869]: else Mar 14 08:59:01 crc kubenswrapper[4869]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 14 08:59:01 crc kubenswrapper[4869]: exit 1 Mar 14 08:59:01 crc kubenswrapper[4869]: fi Mar 14 08:59:01 crc kubenswrapper[4869]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 14 08:59:01 crc kubenswrapper[4869]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 08:59:01 crc kubenswrapper[4869]: > logger="UnhandledError" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.026571 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 14 08:59:01 crc kubenswrapper[4869]: W0314 08:59:01.032626 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b82f1aaba3943c716d30e7dcf72c12c64689770b8437bdc45cfd200cf1e418ee WatchSource:0}: Error finding container b82f1aaba3943c716d30e7dcf72c12c64689770b8437bdc45cfd200cf1e418ee: Status 404 returned error can't find the container with id b82f1aaba3943c716d30e7dcf72c12c64689770b8437bdc45cfd200cf1e418ee Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.035017 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.036238 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.037569 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.042126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.042169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.042182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.042208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.042223 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.043687 4869 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 08:59:01 crc kubenswrapper[4869]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 08:59:01 crc kubenswrapper[4869]: if [[ -f "/env/_master" ]]; then Mar 14 08:59:01 crc kubenswrapper[4869]: set -o allexport Mar 14 08:59:01 crc kubenswrapper[4869]: source "/env/_master" Mar 14 08:59:01 crc kubenswrapper[4869]: set +o allexport Mar 14 08:59:01 crc kubenswrapper[4869]: fi Mar 14 08:59:01 crc kubenswrapper[4869]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 14 08:59:01 crc kubenswrapper[4869]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 14 08:59:01 crc kubenswrapper[4869]: ho_enable="--enable-hybrid-overlay" Mar 14 08:59:01 crc kubenswrapper[4869]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 14 08:59:01 crc kubenswrapper[4869]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 14 08:59:01 crc kubenswrapper[4869]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 14 08:59:01 crc kubenswrapper[4869]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 08:59:01 crc kubenswrapper[4869]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 14 08:59:01 crc kubenswrapper[4869]: --webhook-host=127.0.0.1 \ Mar 14 08:59:01 crc kubenswrapper[4869]: --webhook-port=9743 \ Mar 14 08:59:01 crc kubenswrapper[4869]: ${ho_enable} \ Mar 14 08:59:01 crc kubenswrapper[4869]: --enable-interconnect \ Mar 14 08:59:01 crc kubenswrapper[4869]: --disable-approver \ Mar 14 08:59:01 crc kubenswrapper[4869]: 
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 14 08:59:01 crc kubenswrapper[4869]: --wait-for-kubernetes-api=200s \ Mar 14 08:59:01 crc kubenswrapper[4869]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 14 08:59:01 crc kubenswrapper[4869]: --loglevel="${LOGLEVEL}" Mar 14 08:59:01 crc kubenswrapper[4869]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 08:59:01 crc kubenswrapper[4869]: > logger="UnhandledError" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.046727 4869 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 08:59:01 crc kubenswrapper[4869]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 08:59:01 crc kubenswrapper[4869]: if [[ -f "/env/_master" ]]; then Mar 14 08:59:01 crc kubenswrapper[4869]: set -o allexport Mar 14 08:59:01 crc kubenswrapper[4869]: source "/env/_master" Mar 14 08:59:01 crc kubenswrapper[4869]: set +o allexport Mar 14 08:59:01 crc kubenswrapper[4869]: fi Mar 14 08:59:01 crc kubenswrapper[4869]: Mar 14 08:59:01 crc kubenswrapper[4869]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 14 08:59:01 crc kubenswrapper[4869]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 08:59:01 crc kubenswrapper[4869]: --disable-webhook \ Mar 14 08:59:01 crc kubenswrapper[4869]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 14 08:59:01 crc kubenswrapper[4869]: --loglevel="${LOGLEVEL}" Mar 14 08:59:01 crc kubenswrapper[4869]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 08:59:01 crc kubenswrapper[4869]: > logger="UnhandledError" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.047954 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.053481 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.058357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.058410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.058422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.058444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.058457 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.058539 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b82f1aaba3943c716d30e7dcf72c12c64689770b8437bdc45cfd200cf1e418ee"} Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.062320 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.063697 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.066966 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"28278264513bd59bb44442162071c8ca1948f2fee5d6a7bfc5457666f70cd6c1"} Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.068376 4869 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 08:59:01 crc kubenswrapper[4869]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 14 08:59:01 crc kubenswrapper[4869]: set -o allexport Mar 14 08:59:01 crc kubenswrapper[4869]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 14 08:59:01 crc kubenswrapper[4869]: source /etc/kubernetes/apiserver-url.env Mar 14 08:59:01 crc kubenswrapper[4869]: else Mar 14 08:59:01 crc kubenswrapper[4869]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 14 08:59:01 crc kubenswrapper[4869]: exit 1 Mar 14 08:59:01 crc kubenswrapper[4869]: fi Mar 14 08:59:01 crc kubenswrapper[4869]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 14 08:59:01 crc kubenswrapper[4869]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 08:59:01 crc kubenswrapper[4869]: > logger="UnhandledError" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.068302 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.068960 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.068812 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0480113ab5eae4adc8acaa3f47b5bca2eed9e00d727c4bc5bd59ac535d716d0c"} Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.069923 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.070175 4869 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 08:59:01 crc kubenswrapper[4869]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 08:59:01 crc kubenswrapper[4869]: if [[ -f "/env/_master" ]]; then Mar 14 08:59:01 crc kubenswrapper[4869]: set -o allexport Mar 14 08:59:01 crc kubenswrapper[4869]: source "/env/_master" Mar 14 08:59:01 crc kubenswrapper[4869]: set +o allexport Mar 14 08:59:01 crc kubenswrapper[4869]: fi Mar 14 08:59:01 crc kubenswrapper[4869]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 14 08:59:01 crc kubenswrapper[4869]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 14 08:59:01 crc kubenswrapper[4869]: ho_enable="--enable-hybrid-overlay" Mar 14 08:59:01 crc kubenswrapper[4869]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 14 08:59:01 crc kubenswrapper[4869]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 14 08:59:01 crc kubenswrapper[4869]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 14 08:59:01 crc kubenswrapper[4869]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 08:59:01 crc kubenswrapper[4869]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 14 08:59:01 crc kubenswrapper[4869]: --webhook-host=127.0.0.1 \ Mar 14 08:59:01 crc kubenswrapper[4869]: --webhook-port=9743 \ Mar 14 08:59:01 crc kubenswrapper[4869]: ${ho_enable} \ Mar 14 08:59:01 crc kubenswrapper[4869]: --enable-interconnect \ Mar 14 08:59:01 crc kubenswrapper[4869]: --disable-approver \ Mar 14 08:59:01 crc kubenswrapper[4869]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 14 08:59:01 crc kubenswrapper[4869]: --wait-for-kubernetes-api=200s \ Mar 14 08:59:01 crc kubenswrapper[4869]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 14 08:59:01 crc kubenswrapper[4869]: --loglevel="${LOGLEVEL}" Mar 14 08:59:01 crc kubenswrapper[4869]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 08:59:01 crc kubenswrapper[4869]: > logger="UnhandledError" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.071252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.071335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc 
kubenswrapper[4869]: I0314 08:59:01.071390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.071445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.071500 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.072317 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.074583 4869 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 08:59:01 crc kubenswrapper[4869]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 14 08:59:01 crc kubenswrapper[4869]: if [[ -f "/env/_master" ]]; then Mar 14 08:59:01 crc kubenswrapper[4869]: set -o 
allexport Mar 14 08:59:01 crc kubenswrapper[4869]: source "/env/_master" Mar 14 08:59:01 crc kubenswrapper[4869]: set +o allexport Mar 14 08:59:01 crc kubenswrapper[4869]: fi Mar 14 08:59:01 crc kubenswrapper[4869]: Mar 14 08:59:01 crc kubenswrapper[4869]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 14 08:59:01 crc kubenswrapper[4869]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 14 08:59:01 crc kubenswrapper[4869]: --disable-webhook \ Mar 14 08:59:01 crc kubenswrapper[4869]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 14 08:59:01 crc kubenswrapper[4869]: --loglevel="${LOGLEVEL}" Mar 14 08:59:01 crc kubenswrapper[4869]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 14 08:59:01 crc kubenswrapper[4869]: > logger="UnhandledError" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.075682 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.084379 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.094631 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.104254 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.115900 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.126967 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.139739 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.152237 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.163142 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.173749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.173796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.173808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.173830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.173841 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.177414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.189928 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.203030 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.277853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.278272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.278354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.278419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.278474 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.308329 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.308430 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.308456 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.308474 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.308497 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.308659 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.308678 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.308689 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.308717 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.308741 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:02.308725066 +0000 UTC m=+95.281007119 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.308880 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:02.308852199 +0000 UTC m=+95.281134432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.308981 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.308993 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.309046 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:02.309033204 +0000 UTC m=+95.281315427 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.308996 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.309072 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.309099 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:02.309091085 +0000 UTC m=+95.281373138 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:01 crc kubenswrapper[4869]: E0314 08:59:01.309938 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:59:02.309893085 +0000 UTC m=+95.282175158 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.381114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.381183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.381195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.381214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.381224 4869 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.486007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.486075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.486095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.486119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.486144 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.589099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.589150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.589162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.589179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.589192 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.692614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.692682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.692693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.692711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.692726 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.706874 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.707548 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.709078 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.709735 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.710790 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.711297 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.711933 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.712939 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.713597 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.714559 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.715028 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.716116 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.716624 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.717106 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.718100 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.718657 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.719642 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.720075 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.720749 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.721801 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.722269 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.723257 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.723723 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.724777 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.725266 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.725919 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.727061 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.727616 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.728703 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.729223 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.730105 4869 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.730227 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.731912 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.732637 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.733869 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.735686 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.736822 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.738283 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.739112 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.740427 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.740950 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.741971 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.742640 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.743639 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.744136 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.745121 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.745691 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.746817 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.747341 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.748266 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.748753 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.749702 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.750276 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.750776 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.800253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.800314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc 
kubenswrapper[4869]: I0314 08:59:01.800331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.800354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.800369 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.903925 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.903965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.903978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.903995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:01 crc kubenswrapper[4869]: I0314 08:59:01.904010 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:01Z","lastTransitionTime":"2026-03-14T08:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.007276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.007325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.007337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.007354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.007364 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.109598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.109647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.109660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.109678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.109689 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.211823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.211855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.211863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.211878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.211887 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315029 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315161 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315219 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315298 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:59:04.315257809 +0000 UTC m=+97.287539852 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315319 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315366 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315387 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315401 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315424 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:04.315398572 +0000 UTC m=+97.287680625 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315461 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315493 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315557 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315503 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-14 08:59:04.315484965 +0000 UTC m=+97.287767038 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315597 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315607 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:04.315587157 +0000 UTC m=+97.287869210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.315670 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-03-14 08:59:04.315626078 +0000 UTC m=+97.287908341 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.315837 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.418096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.418197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.418247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.418273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.418291 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.521541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.521597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.521615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.521640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.521656 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.625279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.625357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.625376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.625404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.625423 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.703757 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.703801 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.703802 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.703981 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.704748 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:02 crc kubenswrapper[4869]: E0314 08:59:02.704959 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.728984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.729087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.729148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.729176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.729196 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.833448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.833542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.833560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.833585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.833599 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.936831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.936901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.936923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.936960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:02 crc kubenswrapper[4869]: I0314 08:59:02.936986 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:02Z","lastTransitionTime":"2026-03-14T08:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.040087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.040129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.040140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.040156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.040169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.143998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.144044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.144055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.144073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.144112 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.247936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.247995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.248012 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.248031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.248047 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.351333 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.351418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.351434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.351487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.351536 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.454741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.455243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.455423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.455598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.455773 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.558865 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.558912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.558923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.558942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.558954 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.662302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.662768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.662900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.663020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.663162 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.765771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.766213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.766352 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.766485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.766636 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.870103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.870184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.870204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.870238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.870258 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.973048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.973167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.973181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.973198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:03 crc kubenswrapper[4869]: I0314 08:59:03.973208 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:03Z","lastTransitionTime":"2026-03-14T08:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.075835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.075927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.075962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.076000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.076037 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.178179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.178225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.178234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.178250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.178265 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.280344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.280380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.280392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.280409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.280423 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.335497 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.335671 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.335703 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.335726 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.335757 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.335784 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.335884 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:08.33586465 +0000 UTC m=+101.308146703 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.335890 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.335908 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.335906 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.335920 4869 projected.go:194] 
Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.335947 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.335996 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.336015 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.335956 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:08.335943532 +0000 UTC m=+101.308225585 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.336093 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:08.336080606 +0000 UTC m=+101.308362749 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.336109 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:59:08.336102596 +0000 UTC m=+101.308384749 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.336124 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:08.336115757 +0000 UTC m=+101.308397920 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.383354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.383399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.383411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.383431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.383444 4869 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.485916 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.485967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.485985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.486006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.486020 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.588808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.588853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.588866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.588884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.588896 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.690729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.690773 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.690785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.690803 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.690815 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.703053 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.703075 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.703075 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.703185 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.703280 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:04 crc kubenswrapper[4869]: E0314 08:59:04.703335 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.792997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.793038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.793048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.793064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.793076 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.895897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.895981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.896006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.896031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.896048 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.998829 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.998869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.998888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.998904 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:04 crc kubenswrapper[4869]: I0314 08:59:04.998916 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:04Z","lastTransitionTime":"2026-03-14T08:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.104124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.104164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.104174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.104702 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.107025 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:05Z","lastTransitionTime":"2026-03-14T08:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.182041 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.209200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.209238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.209273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.209302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.209312 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:05Z","lastTransitionTime":"2026-03-14T08:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.311686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.311718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.311726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.311740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.311749 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:05Z","lastTransitionTime":"2026-03-14T08:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.414110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.414139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.414147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.414160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.414170 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:05Z","lastTransitionTime":"2026-03-14T08:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.517624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.517944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.518030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.518130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.518221 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:05Z","lastTransitionTime":"2026-03-14T08:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.621579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.621855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.621983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.622081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.622169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:05Z","lastTransitionTime":"2026-03-14T08:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.724682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.724733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.724743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.724762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.724774 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:05Z","lastTransitionTime":"2026-03-14T08:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.827090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.827149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.827166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.827185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.827201 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:05Z","lastTransitionTime":"2026-03-14T08:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.930641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.930715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.930732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.930758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:05 crc kubenswrapper[4869]: I0314 08:59:05.930784 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:05Z","lastTransitionTime":"2026-03-14T08:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.034158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.034451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.034594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.034719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.034834 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.137962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.139145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.139279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.139594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.139725 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.242942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.243203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.243284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.243388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.243548 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.346242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.346469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.346551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.346623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.346679 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.448860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.448933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.448944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.448960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.448971 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.551842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.552193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.552325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.552469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.552643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.654840 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.655061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.655122 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.655183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.655250 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.704807 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.704942 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.704998 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:06 crc kubenswrapper[4869]: E0314 08:59:06.705927 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:06 crc kubenswrapper[4869]: E0314 08:59:06.706099 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:06 crc kubenswrapper[4869]: E0314 08:59:06.706193 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.721730 4869 scope.go:117] "RemoveContainer" containerID="17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.723830 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.758704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.758749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.758771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.758800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.758823 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.860816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.860841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.860851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.860865 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.860874 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.963450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.963497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.963521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.963538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:06 crc kubenswrapper[4869]: I0314 08:59:06.963547 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:06Z","lastTransitionTime":"2026-03-14T08:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.066835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.067171 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.067180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.067194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.067204 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.089075 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.091188 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.091879 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.109990 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.122503 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.130737 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.140176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.151183 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, 
/tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.159096 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.166064 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.169695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.169720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 
08:59:07.169732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.169750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.169763 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.272364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.272404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.272414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.272432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.272443 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.374650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.374697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.374709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.374729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.374744 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.477262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.477321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.477338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.477366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.477392 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.581118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.581161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.581173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.581192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.581205 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.684852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.684914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.684926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.684947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.684961 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.716074 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.726257 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.736172 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.747338 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.758811 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, 
/tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.768859 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.779268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.788387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.788431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.788447 4869 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.788466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.788478 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.891345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.891389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.891400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.891419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.891433 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.994329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.994368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.994381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.994398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:07 crc kubenswrapper[4869]: I0314 08:59:07.994410 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:07Z","lastTransitionTime":"2026-03-14T08:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.095805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.095852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.095866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.095882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.095894 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:08Z","lastTransitionTime":"2026-03-14T08:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.199084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.199189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.199203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.199225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.199244 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:08Z","lastTransitionTime":"2026-03-14T08:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.301523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.301590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.301611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.301632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.301645 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:08Z","lastTransitionTime":"2026-03-14T08:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.372999 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.373107 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373123 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:59:16.373098986 +0000 UTC m=+109.345381039 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.373157 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.373188 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.373214 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373229 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373241 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373253 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373275 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373292 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:16.37328366 +0000 UTC m=+109.345565713 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373309 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-14 08:59:16.37329968 +0000 UTC m=+109.345581733 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373339 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373377 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373396 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373409 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373430 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:16.373409233 +0000 UTC m=+109.345691286 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.373446 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:16.373440644 +0000 UTC m=+109.345722687 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.403636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.403683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.403696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.403716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.403730 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:08Z","lastTransitionTime":"2026-03-14T08:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.506218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.506264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.506274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.506289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.506300 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:08Z","lastTransitionTime":"2026-03-14T08:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.608876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.608951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.608970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.608996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.609014 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:08Z","lastTransitionTime":"2026-03-14T08:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.703542 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.703613 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.703651 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.703703 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.703766 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:08 crc kubenswrapper[4869]: E0314 08:59:08.703896 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.711244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.711282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.711294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.711310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.711321 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:08Z","lastTransitionTime":"2026-03-14T08:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.813854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.813889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.813899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.813914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.813923 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:08Z","lastTransitionTime":"2026-03-14T08:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.916402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.916436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.916444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.916458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.916468 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:08Z","lastTransitionTime":"2026-03-14T08:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.931468 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9csf6"] Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.931802 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-9csf6" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.935106 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.935258 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.935338 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.953888 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.963633 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.974776 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.985527 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:08 crc kubenswrapper[4869]: I0314 08:59:08.997740 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.009431 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.019470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.019556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.019570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.019588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.019600 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.019837 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.032474 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.079427 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrxf\" (UniqueName: \"kubernetes.io/projected/ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d-kube-api-access-vqrxf\") pod \"node-resolver-9csf6\" (UID: \"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\") " pod="openshift-dns/node-resolver-9csf6" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.079475 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d-hosts-file\") pod \"node-resolver-9csf6\" (UID: \"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\") " pod="openshift-dns/node-resolver-9csf6" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.122548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.122617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.122628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.122647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.122661 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.180624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqrxf\" (UniqueName: \"kubernetes.io/projected/ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d-kube-api-access-vqrxf\") pod \"node-resolver-9csf6\" (UID: \"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\") " pod="openshift-dns/node-resolver-9csf6" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.180677 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d-hosts-file\") pod \"node-resolver-9csf6\" (UID: \"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\") " pod="openshift-dns/node-resolver-9csf6" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.180765 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d-hosts-file\") pod \"node-resolver-9csf6\" (UID: \"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\") " pod="openshift-dns/node-resolver-9csf6" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.196243 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrxf\" (UniqueName: \"kubernetes.io/projected/ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d-kube-api-access-vqrxf\") pod \"node-resolver-9csf6\" (UID: \"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\") " pod="openshift-dns/node-resolver-9csf6" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.225247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.225300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.225322 4869 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.225342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.225354 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.244571 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9csf6" Mar 14 08:59:09 crc kubenswrapper[4869]: W0314 08:59:09.261407 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec9b7f2a_6b0e_434c_8fc5_7736b07d8c1d.slice/crio-f5cba11188e1bfb7acb702d11c61a5ffd51148a7301726e93da70559a0287fda WatchSource:0}: Error finding container f5cba11188e1bfb7acb702d11c61a5ffd51148a7301726e93da70559a0287fda: Status 404 returned error can't find the container with id f5cba11188e1bfb7acb702d11c61a5ffd51148a7301726e93da70559a0287fda Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.285133 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lfk4t"] Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.285973 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jj985"] Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.286364 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.286463 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9nncq"] Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.286648 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.286758 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.289983 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.290761 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.290788 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.290819 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.290766 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.291041 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.291111 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.291318 4869 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.291340 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.291391 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.291796 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.294418 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.301232 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.310105 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.319302 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.328823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.329423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.329432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.329447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 
08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.329456 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.332222 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.343113 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.351343 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.360303 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.370635 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.383550 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.385865 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-system-cni-dir\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-var-lib-kubelet\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " 
pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386034 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-run-k8s-cni-cncf-io\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386053 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e08d1ace-1d27-4a7d-b08e-c245a103c56f-rootfs\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386072 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e08d1ace-1d27-4a7d-b08e-c245a103c56f-proxy-tls\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386103 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-var-lib-cni-bin\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386123 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxc6\" (UniqueName: \"kubernetes.io/projected/3aedc0f3-51fe-492b-9337-02b2b6e38327-kube-api-access-tzxc6\") pod \"multus-9nncq\" (UID: 
\"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386140 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-hostroot\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386166 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-etc-kubernetes\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386184 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-os-release\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-var-lib-cni-multus\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386224 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgflb\" (UniqueName: \"kubernetes.io/projected/e08d1ace-1d27-4a7d-b08e-c245a103c56f-kube-api-access-bgflb\") pod \"machine-config-daemon-jj985\" (UID: 
\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386245 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-cnibin\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386262 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-run-netns\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386280 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvrwh\" (UniqueName: \"kubernetes.io/projected/7f2679ec-a6bd-483b-b5b5-4615e83942a6-kube-api-access-bvrwh\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386298 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3aedc0f3-51fe-492b-9337-02b2b6e38327-cni-binary-copy\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386324 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7f2679ec-a6bd-483b-b5b5-4615e83942a6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386345 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7f2679ec-a6bd-483b-b5b5-4615e83942a6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-cni-dir\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386383 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-cnibin\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386422 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-run-multus-certs\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386548 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-system-cni-dir\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386591 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-daemon-config\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386618 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386654 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-os-release\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386678 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-socket-dir-parent\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386705 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-conf-dir\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.386729 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e08d1ace-1d27-4a7d-b08e-c245a103c56f-mcd-auth-proxy-config\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.392629 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.402602 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.414647 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, 
/tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.425570 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.431139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.431163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.431172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.431186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.431196 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.436978 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.450489 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.464278 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.478755 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-system-cni-dir\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487784 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-var-lib-kubelet\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487812 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-run-k8s-cni-cncf-io\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e08d1ace-1d27-4a7d-b08e-c245a103c56f-rootfs\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e08d1ace-1d27-4a7d-b08e-c245a103c56f-proxy-tls\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487886 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-var-lib-cni-bin\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487907 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzxc6\" (UniqueName: 
\"kubernetes.io/projected/3aedc0f3-51fe-492b-9337-02b2b6e38327-kube-api-access-tzxc6\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487919 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-var-lib-kubelet\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487944 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-hostroot\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-etc-kubernetes\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487975 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-system-cni-dir\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.487989 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-os-release\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: 
\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488009 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-var-lib-cni-multus\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488033 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgflb\" (UniqueName: \"kubernetes.io/projected/e08d1ace-1d27-4a7d-b08e-c245a103c56f-kube-api-access-bgflb\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-etc-kubernetes\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488055 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-cnibin\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488009 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-run-k8s-cni-cncf-io\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " 
pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488160 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-run-netns\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488148 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e08d1ace-1d27-4a7d-b08e-c245a103c56f-rootfs\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-run-netns\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488195 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-os-release\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-cnibin\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488099 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-var-lib-cni-multus\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvrwh\" (UniqueName: \"kubernetes.io/projected/7f2679ec-a6bd-483b-b5b5-4615e83942a6-kube-api-access-bvrwh\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488273 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-var-lib-cni-bin\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3aedc0f3-51fe-492b-9337-02b2b6e38327-cni-binary-copy\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488336 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7f2679ec-a6bd-483b-b5b5-4615e83942a6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488359 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7f2679ec-a6bd-483b-b5b5-4615e83942a6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488382 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-cni-dir\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488403 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-cnibin\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488429 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-run-multus-certs\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488461 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-system-cni-dir\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488494 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-daemon-config\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488562 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-cnibin\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488580 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-os-release\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488605 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-socket-dir-parent\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-conf-dir\") pod \"multus-9nncq\" (UID: 
\"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488664 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e08d1ace-1d27-4a7d-b08e-c245a103c56f-mcd-auth-proxy-config\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488837 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-socket-dir-parent\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488953 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-cni-dir\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.488981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-os-release\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.489037 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-conf-dir\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 
08:59:09.489080 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-host-run-multus-certs\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.489076 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-system-cni-dir\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.489339 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7f2679ec-a6bd-483b-b5b5-4615e83942a6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.489440 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7f2679ec-a6bd-483b-b5b5-4615e83942a6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.489585 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e08d1ace-1d27-4a7d-b08e-c245a103c56f-mcd-auth-proxy-config\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.489594 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3aedc0f3-51fe-492b-9337-02b2b6e38327-cni-binary-copy\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.489633 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3aedc0f3-51fe-492b-9337-02b2b6e38327-hostroot\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.490025 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3aedc0f3-51fe-492b-9337-02b2b6e38327-multus-daemon-config\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.490318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7f2679ec-a6bd-483b-b5b5-4615e83942a6-cni-binary-copy\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.491391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e08d1ace-1d27-4a7d-b08e-c245a103c56f-proxy-tls\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.491708 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.504260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzxc6\" (UniqueName: \"kubernetes.io/projected/3aedc0f3-51fe-492b-9337-02b2b6e38327-kube-api-access-tzxc6\") pod \"multus-9nncq\" (UID: \"3aedc0f3-51fe-492b-9337-02b2b6e38327\") " pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.505023 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgflb\" (UniqueName: \"kubernetes.io/projected/e08d1ace-1d27-4a7d-b08e-c245a103c56f-kube-api-access-bgflb\") pod \"machine-config-daemon-jj985\" (UID: \"e08d1ace-1d27-4a7d-b08e-c245a103c56f\") " pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.506435 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.511493 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvrwh\" (UniqueName: \"kubernetes.io/projected/7f2679ec-a6bd-483b-b5b5-4615e83942a6-kube-api-access-bvrwh\") pod \"multus-additional-cni-plugins-lfk4t\" (UID: \"7f2679ec-a6bd-483b-b5b5-4615e83942a6\") " pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.529139 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.534285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.534327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.534339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.534357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.534370 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.604039 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.611488 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9nncq" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.618583 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" Mar 14 08:59:09 crc kubenswrapper[4869]: W0314 08:59:09.623110 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aedc0f3_51fe_492b_9337_02b2b6e38327.slice/crio-0f7b7e5399b42be5945440d8837469bc3c3bec57d95ad8de7c666d44f9dd6c7e WatchSource:0}: Error finding container 0f7b7e5399b42be5945440d8837469bc3c3bec57d95ad8de7c666d44f9dd6c7e: Status 404 returned error can't find the container with id 0f7b7e5399b42be5945440d8837469bc3c3bec57d95ad8de7c666d44f9dd6c7e Mar 14 08:59:09 crc kubenswrapper[4869]: W0314 08:59:09.635973 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f2679ec_a6bd_483b_b5b5_4615e83942a6.slice/crio-89337fb30371f89f886ac9c69428eb7f0b45b64e7d55f4bf610fc3f265ef40f9 WatchSource:0}: Error finding container 89337fb30371f89f886ac9c69428eb7f0b45b64e7d55f4bf610fc3f265ef40f9: Status 404 returned error can't find the container with id 89337fb30371f89f886ac9c69428eb7f0b45b64e7d55f4bf610fc3f265ef40f9 Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.637957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.638011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.638026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.638046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.638059 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.641384 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bhcmd"] Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.645277 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: W0314 08:59:09.647697 4869 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Mar 14 08:59:09 crc kubenswrapper[4869]: E0314 08:59:09.647852 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.648791 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.648821 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 14 08:59:09 crc 
kubenswrapper[4869]: I0314 08:59:09.649267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.649808 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.649862 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.651269 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.657617 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.668644 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.681054 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.693114 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, 
/tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.702748 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.712320 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.724057 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.734059 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.742400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.742439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.742449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.742468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.742483 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.745449 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.763645 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.775797 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.785620 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.791321 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-slash\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" 
Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.791389 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-systemd\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.791414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-etc-openvswitch\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.791526 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-netd\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.791602 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tv2d\" (UniqueName: \"kubernetes.io/projected/489ada67-a888-460e-862c-cd59acc0c6fe-kube-api-access-2tv2d\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.791686 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-config\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.791882 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-ovn\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.791966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.791990 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-env-overrides\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-netns\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792047 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-openvswitch\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792187 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-systemd-units\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792232 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-log-socket\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-bin\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-kubelet\") pod 
\"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792320 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/489ada67-a888-460e-862c-cd59acc0c6fe-ovn-node-metrics-cert\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792354 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-var-lib-openvswitch\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792378 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-script-lib\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.792401 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-node-log\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.846024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.846073 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.846085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.846106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.846118 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893395 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-env-overrides\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893471 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893488 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-netns\") pod 
\"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893668 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893672 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-openvswitch\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-openvswitch\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-netns\") 
pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893787 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-log-socket\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-bin\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893835 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-bin\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-systemd-units\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-systemd-units\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-kubelet\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.893969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/489ada67-a888-460e-862c-cd59acc0c6fe-ovn-node-metrics-cert\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.894003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-kubelet\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.894084 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-var-lib-openvswitch\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc 
kubenswrapper[4869]: I0314 08:59:09.894040 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-var-lib-openvswitch\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.894122 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-script-lib\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.894139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-log-socket\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.894387 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-env-overrides\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-script-lib\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895235 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-node-log\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895289 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-slash\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895335 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-systemd\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895335 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-node-log\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-slash\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-systemd\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895420 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-netd\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895548 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-netd\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895586 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-etc-openvswitch\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tv2d\" (UniqueName: \"kubernetes.io/projected/489ada67-a888-460e-862c-cd59acc0c6fe-kube-api-access-2tv2d\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895744 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-ovn\") pod 
\"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895633 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-etc-openvswitch\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895799 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-ovn\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.895881 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-config\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.896636 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-config\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.917019 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tv2d\" (UniqueName: \"kubernetes.io/projected/489ada67-a888-460e-862c-cd59acc0c6fe-kube-api-access-2tv2d\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.948233 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.948282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.948292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.948309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:09 crc kubenswrapper[4869]: I0314 08:59:09.948320 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:09Z","lastTransitionTime":"2026-03-14T08:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.051251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.051304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.051320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.051352 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.051367 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.101050 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f2679ec-a6bd-483b-b5b5-4615e83942a6" containerID="3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5" exitCode=0 Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.101128 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" event={"ID":"7f2679ec-a6bd-483b-b5b5-4615e83942a6","Type":"ContainerDied","Data":"3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.101163 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" event={"ID":"7f2679ec-a6bd-483b-b5b5-4615e83942a6","Type":"ContainerStarted","Data":"89337fb30371f89f886ac9c69428eb7f0b45b64e7d55f4bf610fc3f265ef40f9"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.103533 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nncq" event={"ID":"3aedc0f3-51fe-492b-9337-02b2b6e38327","Type":"ContainerStarted","Data":"8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.103572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nncq" event={"ID":"3aedc0f3-51fe-492b-9337-02b2b6e38327","Type":"ContainerStarted","Data":"0f7b7e5399b42be5945440d8837469bc3c3bec57d95ad8de7c666d44f9dd6c7e"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.107136 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.107165 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.107178 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"98f0e947d773ff63892f79be798105f371b0179ff6cebdd5f7071c80396cc479"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.109282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9csf6" event={"ID":"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d","Type":"ContainerStarted","Data":"1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.109334 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9csf6" event={"ID":"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d","Type":"ContainerStarted","Data":"f5cba11188e1bfb7acb702d11c61a5ffd51148a7301726e93da70559a0287fda"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.121066 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.133417 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.147528 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"st
ate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bin
ary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.154762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.154805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.154816 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.154836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.154850 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.160416 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.175389 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.189804 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.203355 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.217367 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.242181 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.253586 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.258077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.258118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.258128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.258143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.258152 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.263991 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.273744 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.285765 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.300264 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.316742 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\
"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint
\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.333081 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.346609 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.360006 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.362470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.362518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.362531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 
08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.362547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.362557 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.369775 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335
ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.379713 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69
ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.392953 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.410263 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.421364 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.430751 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.465049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.465083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 
08:59:10.465092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.465106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.465116 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.567306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.567348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.567359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.567377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.567391 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.669972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.670026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.670037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.670053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.670067 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.703162 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.703277 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.703172 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:10 crc kubenswrapper[4869]: E0314 08:59:10.703374 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:10 crc kubenswrapper[4869]: E0314 08:59:10.703443 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:10 crc kubenswrapper[4869]: E0314 08:59:10.703586 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.772799 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.772843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.772854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.772870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.772882 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.787062 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.801600 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/489ada67-a888-460e-862c-cd59acc0c6fe-ovn-node-metrics-cert\") pod \"ovnkube-node-bhcmd\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.874990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.875051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.875063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.875087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.875104 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.889296 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:10 crc kubenswrapper[4869]: W0314 08:59:10.916242 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod489ada67_a888_460e_862c_cd59acc0c6fe.slice/crio-b246c71dba690ea83fd1de06ec26b68f5e94c2cd8987c710114d2b13571587ef WatchSource:0}: Error finding container b246c71dba690ea83fd1de06ec26b68f5e94c2cd8987c710114d2b13571587ef: Status 404 returned error can't find the container with id b246c71dba690ea83fd1de06ec26b68f5e94c2cd8987c710114d2b13571587ef Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.978339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.978406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.978422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.978445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:10 crc kubenswrapper[4869]: I0314 08:59:10.978461 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:10Z","lastTransitionTime":"2026-03-14T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.081777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.081820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.081829 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.081845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.081857 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.114388 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f2679ec-a6bd-483b-b5b5-4615e83942a6" containerID="9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff" exitCode=0 Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.114466 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" event={"ID":"7f2679ec-a6bd-483b-b5b5-4615e83942a6","Type":"ContainerDied","Data":"9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.115414 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"b246c71dba690ea83fd1de06ec26b68f5e94c2cd8987c710114d2b13571587ef"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.135300 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.154528 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.169588 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.183630 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.184445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.184478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.184490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 
08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.184521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.184537 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.196082 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335
ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.206011 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69
ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.229970 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.240961 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.252956 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.272281 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.283079 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.287533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.287582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.287593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.287611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.287623 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.293612 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.313089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.313158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.313172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.313196 
4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.313209 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: E0314 08:59:11.325249 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.329262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.329315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.329327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.329344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.329354 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: E0314 08:59:11.338366 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.341973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.342019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.342029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.342045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.342058 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: E0314 08:59:11.351095 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.354635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.354683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.354693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.354710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.354723 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: E0314 08:59:11.366081 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.370276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.370310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.370321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.370343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.370358 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: E0314 08:59:11.383863 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:11 crc kubenswrapper[4869]: E0314 08:59:11.384026 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.390484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.390541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.390552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.390565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.390574 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.492949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.493462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.493478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.493502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.493536 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.596298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.596361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.596377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.596398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.596413 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.699319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.699367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.699378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.699395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.699406 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.801925 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.801962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.801973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.801988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.801998 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.903991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.904041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.904050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.904064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:11 crc kubenswrapper[4869]: I0314 08:59:11.904074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:11Z","lastTransitionTime":"2026-03-14T08:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.006542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.006588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.006596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.006609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.006618 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.109343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.109387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.109399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.109415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.109426 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.120598 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f2679ec-a6bd-483b-b5b5-4615e83942a6" containerID="3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536" exitCode=0 Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.120626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" event={"ID":"7f2679ec-a6bd-483b-b5b5-4615e83942a6","Type":"ContainerDied","Data":"3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.122046 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92" exitCode=0 Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.122077 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.140268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.155049 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.162419 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.173313 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 
08:59:12.184991 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.202202 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.212605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.212639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.212650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.212668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.212681 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.217015 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.229450 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.242745 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.256408 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.275668 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.290581 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\
"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"}
,{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.303274 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.316020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.316065 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.316078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.316098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.316111 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.318298 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.330437 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.346933 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\
"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"}
,{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.361251 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.372979 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.381383 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.394718 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 
08:59:12.410247 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.418525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.418565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.418575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.418591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.418604 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.428231 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.440077 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.452983 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.522347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.522400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.522409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 
08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.522427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.522438 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.627804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.627852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.627862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.627882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.627895 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.703327 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.703393 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.703330 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:12 crc kubenswrapper[4869]: E0314 08:59:12.703560 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:12 crc kubenswrapper[4869]: E0314 08:59:12.703618 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:12 crc kubenswrapper[4869]: E0314 08:59:12.704000 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.730888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.730933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.730943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.730957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.730968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.833022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.833077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.833092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.833106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.833119 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.936997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.937039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.937047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.937064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:12 crc kubenswrapper[4869]: I0314 08:59:12.937074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:12Z","lastTransitionTime":"2026-03-14T08:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.040230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.040272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.040283 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.040298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.040311 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.129374 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f2679ec-a6bd-483b-b5b5-4615e83942a6" containerID="5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060" exitCode=0 Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.129407 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" event={"ID":"7f2679ec-a6bd-483b-b5b5-4615e83942a6","Type":"ContainerDied","Data":"5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.136644 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.136698 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.136713 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.136727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.136740 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.136775 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.142670 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.143283 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.143320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.143334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.143350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.143361 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.153901 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.169867 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae5
4b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:
59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.189361 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.210593 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.220743 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.229938 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.240333 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.247046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.247080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.247089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.247108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.247122 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.249993 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.262824 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.274675 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"
name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.285221 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.349775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.349838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.349850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.349867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.349880 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.453414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.453483 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.453500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.453555 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.453577 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.557260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.557369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.557395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.557490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.557554 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.661425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.661475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.661489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.661528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.661545 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.763894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.763939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.763974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.763988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.763999 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.866892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.866926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.866939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.866955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.866968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.969258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.969290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.969298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.969311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:13 crc kubenswrapper[4869]: I0314 08:59:13.969319 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:13Z","lastTransitionTime":"2026-03-14T08:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.071153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.071199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.071209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.071223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.071232 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:14Z","lastTransitionTime":"2026-03-14T08:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.141240 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f2679ec-a6bd-483b-b5b5-4615e83942a6" containerID="eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a" exitCode=0 Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.141316 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" event={"ID":"7f2679ec-a6bd-483b-b5b5-4615e83942a6","Type":"ContainerDied","Data":"eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.150539 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.159136 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.170732 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.177951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.177998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.178010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.178029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.178261 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:14Z","lastTransitionTime":"2026-03-14T08:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.181754 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.192527 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.205923 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, 
/tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.214344 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.221773 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.234003 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 
08:59:14.246777 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.263981 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.276658 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.281781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.281812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.281821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.281836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.281845 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:14Z","lastTransitionTime":"2026-03-14T08:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.384112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.384154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.384162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.384176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.384186 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:14Z","lastTransitionTime":"2026-03-14T08:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.486062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.486100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.486112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.486127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.486137 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:14Z","lastTransitionTime":"2026-03-14T08:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.589620 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.589653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.589662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.589676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.589685 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:14Z","lastTransitionTime":"2026-03-14T08:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.692600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.692641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.692652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.692665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.692675 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:14Z","lastTransitionTime":"2026-03-14T08:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.703362 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.703460 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.703381 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:14 crc kubenswrapper[4869]: E0314 08:59:14.703907 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:14 crc kubenswrapper[4869]: E0314 08:59:14.703983 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:14 crc kubenswrapper[4869]: E0314 08:59:14.704278 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.796107 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.796153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.796167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.796183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.796195 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:14Z","lastTransitionTime":"2026-03-14T08:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.898557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.898627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.898645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.898663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:14 crc kubenswrapper[4869]: I0314 08:59:14.898680 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:14Z","lastTransitionTime":"2026-03-14T08:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.002126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.002188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.002205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.002224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.002236 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.105146 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.105187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.105197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.105211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.105221 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.145806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.145858 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.149552 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f2679ec-a6bd-483b-b5b5-4615e83942a6" containerID="75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097" exitCode=0 Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.149611 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" event={"ID":"7f2679ec-a6bd-483b-b5b5-4615e83942a6","Type":"ContainerDied","Data":"75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.163806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.168813 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.178312 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.186386 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.197879 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 
08:59:15.207287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.207339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.207350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.207369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.207381 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.211059 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.228970 
4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.240433 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.252862 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.265454 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.278027 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.289332 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.308436 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, 
/tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.311201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.311234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.311243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.311261 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.311273 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.320381 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.330644 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.343583 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.352748 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-fr765"] Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.353229 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.355106 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.360196 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.360388 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 
14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.360388 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.362854 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.367351 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\
\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.383483 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.399667 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.412646 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.414151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.414179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.414188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.414204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.414215 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.425539 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.439526 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.451089 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 
08:59:15.454693 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gbvq\" (UniqueName: \"kubernetes.io/projected/6ba53a2e-898a-4ea2-b6c2-c9624c757416-kube-api-access-5gbvq\") pod \"node-ca-fr765\" (UID: \"6ba53a2e-898a-4ea2-b6c2-c9624c757416\") " pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.454795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba53a2e-898a-4ea2-b6c2-c9624c757416-host\") pod \"node-ca-fr765\" (UID: \"6ba53a2e-898a-4ea2-b6c2-c9624c757416\") " pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.454876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6ba53a2e-898a-4ea2-b6c2-c9624c757416-serviceca\") pod \"node-ca-fr765\" (UID: \"6ba53a2e-898a-4ea2-b6c2-c9624c757416\") " pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.465230 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.478058 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.490941 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.502660 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.517074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.517182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.517198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.517210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.517220 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.518065 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.532914 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.544592 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.555980 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.556227 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6ba53a2e-898a-4ea2-b6c2-c9624c757416-serviceca\") pod \"node-ca-fr765\" (UID: \"6ba53a2e-898a-4ea2-b6c2-c9624c757416\") " pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.556281 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gbvq\" (UniqueName: \"kubernetes.io/projected/6ba53a2e-898a-4ea2-b6c2-c9624c757416-kube-api-access-5gbvq\") pod \"node-ca-fr765\" (UID: \"6ba53a2e-898a-4ea2-b6c2-c9624c757416\") " pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.556325 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba53a2e-898a-4ea2-b6c2-c9624c757416-host\") pod \"node-ca-fr765\" (UID: \"6ba53a2e-898a-4ea2-b6c2-c9624c757416\") " pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.556394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba53a2e-898a-4ea2-b6c2-c9624c757416-host\") pod \"node-ca-fr765\" (UID: \"6ba53a2e-898a-4ea2-b6c2-c9624c757416\") " pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.557690 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6ba53a2e-898a-4ea2-b6c2-c9624c757416-serviceca\") pod \"node-ca-fr765\" (UID: \"6ba53a2e-898a-4ea2-b6c2-c9624c757416\") " pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.570692 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: 
I0314 08:59:15.582054 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gbvq\" (UniqueName: \"kubernetes.io/projected/6ba53a2e-898a-4ea2-b6c2-c9624c757416-kube-api-access-5gbvq\") pod \"node-ca-fr765\" (UID: \"6ba53a2e-898a-4ea2-b6c2-c9624c757416\") " pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.582439 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.603752 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.614818 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.619530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.619556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.619566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 
08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.619581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.619591 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.627564 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.641419 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.668786 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fr765" Mar 14 08:59:15 crc kubenswrapper[4869]: W0314 08:59:15.681918 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ba53a2e_898a_4ea2_b6c2_c9624c757416.slice/crio-2efc36a31b2ea1ed6e3e00352632390456c86ba6e37208585df8bdb15c1f11c8 WatchSource:0}: Error finding container 2efc36a31b2ea1ed6e3e00352632390456c86ba6e37208585df8bdb15c1f11c8: Status 404 returned error can't find the container with id 2efc36a31b2ea1ed6e3e00352632390456c86ba6e37208585df8bdb15c1f11c8 Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.721876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.721903 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.721912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.721924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.721932 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.825273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.825306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.825317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.825337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.825350 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.927431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.927462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.927470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.927484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:15 crc kubenswrapper[4869]: I0314 08:59:15.927494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:15Z","lastTransitionTime":"2026-03-14T08:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.030000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.030042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.030053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.030074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.030087 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.132870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.132907 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.132918 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.132932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.132941 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.168420 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.174055 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" event={"ID":"7f2679ec-a6bd-483b-b5b5-4615e83942a6","Type":"ContainerStarted","Data":"2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.176481 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fr765" event={"ID":"6ba53a2e-898a-4ea2-b6c2-c9624c757416","Type":"ContainerStarted","Data":"9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.176634 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fr765" event={"ID":"6ba53a2e-898a-4ea2-b6c2-c9624c757416","Type":"ContainerStarted","Data":"2efc36a31b2ea1ed6e3e00352632390456c86ba6e37208585df8bdb15c1f11c8"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.182889 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.187539 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69
ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.199068 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.235862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc 
kubenswrapper[4869]: I0314 08:59:16.235903 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.235914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.235931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.235942 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.249687 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.268950 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.285745 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.300414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.311739 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.323379 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.334301 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.338258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.338322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.338336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.338358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.338371 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.350189 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.364590 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, 
/tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.379125 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.391163 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.404223 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.419359 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.436626 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.440377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.440438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.440451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.440475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.440490 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.454363 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.465774 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.465908 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 
08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.465934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.465968 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.465990 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466132 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466234 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:32.466186706 +0000 UTC m=+125.438468759 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466263 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466283 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466337 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:32.466323099 +0000 UTC m=+125.438605142 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466288 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466364 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466435 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 08:59:32.46637915 +0000 UTC m=+125.438661203 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466478 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:32.466468183 +0000 UTC m=+125.438750236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466489 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466534 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466546 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.466579 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:32.466569185 +0000 UTC m=+125.438851468 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.467887 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.479865 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.490812 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.503085 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.522356 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.535273 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.543007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.543053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.543063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 
08:59:16.543082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.543096 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.552025 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.564449 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.583400 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.646404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.646455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.646463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.646480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.646491 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.703794 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.703796 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.704051 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.704098 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.704229 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:16 crc kubenswrapper[4869]: E0314 08:59:16.704359 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.750774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.750820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.750832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.750849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.750860 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.853775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.853850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.853914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.853982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.854007 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.957048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.957082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.957094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.957110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:16 crc kubenswrapper[4869]: I0314 08:59:16.957121 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:16Z","lastTransitionTime":"2026-03-14T08:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.063241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.063291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.063304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.063324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.063341 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:17Z","lastTransitionTime":"2026-03-14T08:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.179691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.180118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.180142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.180170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.180188 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:17Z","lastTransitionTime":"2026-03-14T08:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.284446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.284497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.284548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.284570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.284585 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:17Z","lastTransitionTime":"2026-03-14T08:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.387306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.387349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.387371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.387392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.387404 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:17Z","lastTransitionTime":"2026-03-14T08:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.490307 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.490349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.490361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.490377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.490387 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:17Z","lastTransitionTime":"2026-03-14T08:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.592944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.593006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.593017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.593038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.593051 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:17Z","lastTransitionTime":"2026-03-14T08:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.695621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.695666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.695675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.695690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.695701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:17Z","lastTransitionTime":"2026-03-14T08:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.717741 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.729961 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.746174 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.760761 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.772945 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.785159 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.797237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.797269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.797282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.797302 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.797318 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:17Z","lastTransitionTime":"2026-03-14T08:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.806782 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.822963 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08
:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.856819 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.857774 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.874133 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.886820 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.898565 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.900102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.900135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.900145 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.900161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.900178 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:17Z","lastTransitionTime":"2026-03-14T08:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.910777 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd7
86768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.923089 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.933827 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.947282 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653
ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.963693 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.982299 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:17 crc kubenswrapper[4869]: I0314 08:59:17.996691 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:17Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.002635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.002681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.002705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.002722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.002735 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.010188 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.021458 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69
ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.038975 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.056610 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.067961 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.081395 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.095915 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.106120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.106160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.106170 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.106184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.106194 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.195144 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.195547 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.208688 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.208769 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.208795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.208810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.208831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.208845 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.222651 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.239742 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.252617 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.259009 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.263901 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.275330 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T0
8:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.292739 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.303831 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.311785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.311834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.311846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.311864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.311878 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.314652 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.326534 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.338536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.350580 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.365226 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.376538 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.387051 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.395562 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.406453 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.413490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.413563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.413576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.413593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.413604 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.419483 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z 
is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.434909 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.444946 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.458463 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.471362 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.485401 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.499338 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.512334 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.516440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.516484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.516494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.516526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.516536 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.531714 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:18Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.621063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.621115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.621128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.621166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.621179 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.703237 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:18 crc kubenswrapper[4869]: E0314 08:59:18.703370 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.703268 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:18 crc kubenswrapper[4869]: E0314 08:59:18.703453 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.703245 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:18 crc kubenswrapper[4869]: E0314 08:59:18.703561 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.724259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.724314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.724328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.724351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.724363 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.828016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.828077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.828090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.828114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.828130 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.931875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.931954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.931984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.932019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:18 crc kubenswrapper[4869]: I0314 08:59:18.932043 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:18Z","lastTransitionTime":"2026-03-14T08:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.035315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.035379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.035397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.035422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.035440 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.137947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.137992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.138003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.138019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.138029 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.199205 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.199247 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.220236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.236072 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.240365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.240558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.240674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.240818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.240909 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.267964 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.285009 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.306586 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.320482 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.341185 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.343875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.343917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.343928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.343949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.343962 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.357600 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.372418 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.383475 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.395487 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.407662 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.424454 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.439565 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:19Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.446266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.446422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.446574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.446724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.446754 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.549158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.549218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.549228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.549246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.549256 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.651989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.652037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.652054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.652071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.652082 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.755058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.755116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.755130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.755150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.755163 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.857490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.857539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.857550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.857564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.857573 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.960436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.960477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.960485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.960502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:19 crc kubenswrapper[4869]: I0314 08:59:19.960526 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:19Z","lastTransitionTime":"2026-03-14T08:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.067041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.067082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.067092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.067107 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.067118 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.169004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.169084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.169095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.169112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.169123 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.202398 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/0.log" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.204873 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc" exitCode=1 Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.204906 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.205871 4869 scope.go:117] "RemoveContainer" containerID="1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.219819 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.239026 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.253212 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.267993 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.272674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.272709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.272721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.272735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.272744 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.280687 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.293168 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.306482 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69
ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.319481 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.344775 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:20Z\\\",\\\"message\\\":\\\"40\\\\nI0314 08:59:20.164878 6755 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0314 08:59:20.165598 6755 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:20.165641 6755 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:20.165645 6755 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:20.165655 6755 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0314 08:59:20.165661 6755 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0314 08:59:20.165671 6755 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:20.165699 6755 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:20.165703 6755 factory.go:656] Stopping watch factory\\\\nI0314 08:59:20.165721 6755 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:20.165728 6755 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:20.165730 6755 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:20.165733 6755 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:20.165739 6755 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.354869 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.366338 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.374898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.374930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.374939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.374954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.374963 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.380747 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.390674 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:20Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.477274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.477315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.477328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.477344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.477355 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.579366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.579412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.579423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.579436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.579445 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.682047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.682121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.682130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.682142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.682151 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.703392 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.703410 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:20 crc kubenswrapper[4869]: E0314 08:59:20.703643 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:20 crc kubenswrapper[4869]: E0314 08:59:20.703533 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.703410 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:20 crc kubenswrapper[4869]: E0314 08:59:20.703711 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.784372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.784424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.784437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.784456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.784470 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.887475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.887548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.887558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.887573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.887590 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.989100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.989146 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.989157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.989172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:20 crc kubenswrapper[4869]: I0314 08:59:20.989181 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:20Z","lastTransitionTime":"2026-03-14T08:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.092270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.092330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.092342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.092364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.092378 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.195740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.195801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.195816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.195839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.195854 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.211600 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/0.log" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.215820 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.215874 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62"] Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.216360 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.216440 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.218623 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.219212 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.235552 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.252738 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.268645 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.282983 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.297814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.297866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.297877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.297895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.297906 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.299820 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z 
is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.319241 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:20Z\\\",\\\"message\\\":\\\"40\\\\nI0314 08:59:20.164878 6755 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0314 08:59:20.165598 6755 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0314 08:59:20.165641 6755 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:20.165645 6755 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:20.165655 6755 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0314 08:59:20.165661 6755 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0314 08:59:20.165671 6755 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:20.165699 6755 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:20.165703 6755 factory.go:656] Stopping watch factory\\\\nI0314 08:59:20.165721 6755 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:20.165728 6755 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:20.165730 6755 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:20.165733 6755 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:20.165739 6755 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.322382 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.322444 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84zwg\" (UniqueName: \"kubernetes.io/projected/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-kube-api-access-84zwg\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.322542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.322575 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.335315 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.350411 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.366137 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.381757 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.401141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.401197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.401211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.401234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.401248 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.405021 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.422832 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.422955 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: 
\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.422989 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84zwg\" (UniqueName: \"kubernetes.io/projected/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-kube-api-access-84zwg\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.423027 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.423046 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.423572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.423793 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.431347 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.442400 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.445246 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84zwg\" (UniqueName: \"kubernetes.io/projected/eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff-kube-api-access-84zwg\") pod \"ovnkube-control-plane-749d76644c-s2v62\" (UID: \"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.459907 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.480847 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:20Z\\\",\\\"message\\\":\\\"40\\\\nI0314 08:59:20.164878 6755 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0314 08:59:20.165598 6755 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0314 08:59:20.165641 6755 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:20.165645 6755 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:20.165655 6755 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0314 08:59:20.165661 6755 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0314 08:59:20.165671 6755 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:20.165699 6755 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:20.165703 6755 factory.go:656] Stopping watch factory\\\\nI0314 08:59:20.165721 6755 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:20.165728 6755 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:20.165730 6755 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:20.165733 6755 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:20.165739 6755 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.495550 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.504435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.504486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.504499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.504538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.504552 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.514609 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.533875 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.538205 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.551896 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: W0314 08:59:21.554696 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaac38bd_f9ff_4eee_89ab_2b2ef4cf57ff.slice/crio-ca298834ea1624287005c3b626bce33d608bf3bb2c7bf51759d2252f80bd72d2 WatchSource:0}: Error finding container ca298834ea1624287005c3b626bce33d608bf3bb2c7bf51759d2252f80bd72d2: Status 404 returned error can't find the container with id ca298834ea1624287005c3b626bce33d608bf3bb2c7bf51759d2252f80bd72d2 Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.566037 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69
ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.582252 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.595444 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.606754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.606794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.606805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.606821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.606830 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.609773 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.624525 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.641500 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.657932 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.674373 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.709421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.709466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.709478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.709495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.709525 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.717914 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.771757 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.771810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.771821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.771841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.771852 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: E0314 08:59:21.790179 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.796734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.796774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.796788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.796807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.796821 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: E0314 08:59:21.812957 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.818450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.818494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.818532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.818559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.818577 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: E0314 08:59:21.834128 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.839680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.839731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.839746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.839766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.839779 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: E0314 08:59:21.855884 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.861095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.861145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.861157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.861176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.861187 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: E0314 08:59:21.874810 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:21 crc kubenswrapper[4869]: E0314 08:59:21.874958 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.876654 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.876697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.876711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.876728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.876740 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.972148 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n77vq"] Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.972810 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:21 crc kubenswrapper[4869]: E0314 08:59:21.972891 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.979234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.979277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.979289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.979306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.979317 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:21Z","lastTransitionTime":"2026-03-14T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:21 crc kubenswrapper[4869]: I0314 08:59:21.985995 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.001072 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da5
6c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" 
for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:21Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.015604 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026
-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.037471 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:20Z\\\",\\\"message\\\":\\\"40\\\\nI0314 08:59:20.164878 6755 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0314 08:59:20.165598 6755 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0314 08:59:20.165641 6755 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:20.165645 6755 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:20.165655 6755 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0314 08:59:20.165661 6755 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0314 08:59:20.165671 6755 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:20.165699 6755 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:20.165703 6755 factory.go:656] Stopping watch factory\\\\nI0314 08:59:20.165721 6755 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:20.165728 6755 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:20.165730 6755 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:20.165733 6755 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:20.165739 6755 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.049046 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.067026 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.080360 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.082158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.082297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.082326 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.082368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.082398 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:22Z","lastTransitionTime":"2026-03-14T08:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.094372 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.122222 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.133704 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t49vv\" (UniqueName: \"kubernetes.io/projected/0b5b025a-d78e-4728-b492-19846b3ad862-kube-api-access-t49vv\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.134053 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.138410 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.158356 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.177857 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653
ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.184257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.184449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.184556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.184645 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.184735 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:22Z","lastTransitionTime":"2026-03-14T08:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.194239 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.206594 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc 
kubenswrapper[4869]: I0314 08:59:22.221176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" event={"ID":"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff","Type":"ContainerStarted","Data":"d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.221246 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" event={"ID":"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff","Type":"ContainerStarted","Data":"1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.221262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" event={"ID":"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff","Type":"ContainerStarted","Data":"ca298834ea1624287005c3b626bce33d608bf3bb2c7bf51759d2252f80bd72d2"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.223643 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/1.log" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.224374 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/0.log" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.227319 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839" exitCode=1 Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.228106 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" 
event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.228151 4869 scope.go:117] "RemoveContainer" containerID="1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.228459 4869 scope.go:117] "RemoveContainer" containerID="76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839" Mar 14 08:59:22 crc kubenswrapper[4869]: E0314 08:59:22.228660 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.229869 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\"
,\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.235549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.235648 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t49vv\" (UniqueName: \"kubernetes.io/projected/0b5b025a-d78e-4728-b492-19846b3ad862-kube-api-access-t49vv\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:22 crc kubenswrapper[4869]: E0314 08:59:22.235773 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:22 crc kubenswrapper[4869]: E0314 08:59:22.235887 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs podName:0b5b025a-d78e-4728-b492-19846b3ad862 nodeName:}" failed. 
No retries permitted until 2026-03-14 08:59:22.735861958 +0000 UTC m=+115.708144021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs") pod "network-metrics-daemon-n77vq" (UID: "0b5b025a-d78e-4728-b492-19846b3ad862") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.246350 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.254022 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t49vv\" (UniqueName: \"kubernetes.io/projected/0b5b025a-d78e-4728-b492-19846b3ad862-kube-api-access-t49vv\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.261906 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.274918 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.288211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.288261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.288273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.288296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.288307 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:22Z","lastTransitionTime":"2026-03-14T08:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.289450 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.306999 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.329863 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be3a432c407c3281859e83801bf267a38821f692b98a6e7ac8ab4e8aaa5edcc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:20Z\\\",\\\"message\\\":\\\"40\\\\nI0314 08:59:20.164878 6755 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0314 08:59:20.165598 6755 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0314 08:59:20.165641 6755 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0314 08:59:20.165645 6755 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0314 08:59:20.165655 6755 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0314 08:59:20.165661 6755 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0314 08:59:20.165671 6755 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0314 08:59:20.165699 6755 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0314 08:59:20.165703 6755 factory.go:656] Stopping watch factory\\\\nI0314 08:59:20.165721 6755 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:20.165728 6755 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:20.165730 6755 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:20.165733 6755 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:20.165739 6755 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0314 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0314 08:59:21.253553 6875 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:21.253704 6875 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:21.253749 6875 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:21.253786 6875 handler.go:208] Removed *v1.Node event handler 7\\\\nI0314 08:59:21.253812 6875 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.253883 6875 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0314 08:59:21.253915 6875 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:21.254033 6875 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0314 08:59:21.254078 6875 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0314 08:59:21.254161 6875 factory.go:656] Stopping watch factory\\\\nI0314 08:59:21.254204 6875 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:21.254046 6875 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.254398 6875 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0314 08:59:21.254439 6875 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0314 08:59:21.254479 6875 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:21.254669 6875 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni
-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.341421 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-fr765" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.354448 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.369201 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b
439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.380565 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.390546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.390588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.390601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.390622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.390635 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:22Z","lastTransitionTime":"2026-03-14T08:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.403576 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.418798 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.431870 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.446046 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.461783 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.475317 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc 
kubenswrapper[4869]: I0314 08:59:22.488826 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:22Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.493549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.493589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.493605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.493626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.493640 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:22Z","lastTransitionTime":"2026-03-14T08:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.595465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.595517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.595527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.595540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.595549 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:22Z","lastTransitionTime":"2026-03-14T08:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.698051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.698431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.698577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.698714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.698824 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:22Z","lastTransitionTime":"2026-03-14T08:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.703296 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.703375 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:22 crc kubenswrapper[4869]: E0314 08:59:22.703451 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:22 crc kubenswrapper[4869]: E0314 08:59:22.703562 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.703793 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:22 crc kubenswrapper[4869]: E0314 08:59:22.703963 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.741672 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:22 crc kubenswrapper[4869]: E0314 08:59:22.741795 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:22 crc kubenswrapper[4869]: E0314 08:59:22.742159 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs podName:0b5b025a-d78e-4728-b492-19846b3ad862 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:23.742138094 +0000 UTC m=+116.714420167 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs") pod "network-metrics-daemon-n77vq" (UID: "0b5b025a-d78e-4728-b492-19846b3ad862") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.801786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.801841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.801858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.801877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.801889 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:22Z","lastTransitionTime":"2026-03-14T08:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.904934 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.904989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.905003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.905023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:22 crc kubenswrapper[4869]: I0314 08:59:22.905037 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:22Z","lastTransitionTime":"2026-03-14T08:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.006857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.006895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.006903 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.006917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.006927 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.109840 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.110303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.110426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.110606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.110747 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.214197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.214250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.214259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.214277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.214289 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.232673 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/1.log" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.236588 4869 scope.go:117] "RemoveContainer" containerID="76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839" Mar 14 08:59:23 crc kubenswrapper[4869]: E0314 08:59:23.236748 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.255908 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.280630 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.294979 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.306450 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.316293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.316325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.316334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.316352 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.316360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.321320 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.334856 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.359724 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0314 08:59:21.253553 6875 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:21.253704 6875 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:21.253749 6875 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:21.253786 6875 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0314 08:59:21.253812 6875 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.253883 6875 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0314 08:59:21.253915 6875 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:21.254033 6875 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0314 08:59:21.254078 6875 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0314 08:59:21.254161 6875 factory.go:656] Stopping watch factory\\\\nI0314 08:59:21.254204 6875 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:21.254046 6875 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.254398 6875 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0314 08:59:21.254439 6875 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0314 08:59:21.254479 6875 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:21.254669 6875 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.371047 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b
439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.389605 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.404399 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.417392 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.418893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.418925 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.418936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.418953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.418966 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.431392 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.444543 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.456672 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.470557 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.480453 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:23Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:23 crc 
kubenswrapper[4869]: I0314 08:59:23.521420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.521483 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.521491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.521522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.521531 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.623613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.623653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.623663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.623680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.623693 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.703589 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:23 crc kubenswrapper[4869]: E0314 08:59:23.703724 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.726789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.726879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.726905 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.726940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.726960 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.752828 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:23 crc kubenswrapper[4869]: E0314 08:59:23.752955 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:23 crc kubenswrapper[4869]: E0314 08:59:23.753029 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs podName:0b5b025a-d78e-4728-b492-19846b3ad862 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:25.753009814 +0000 UTC m=+118.725291887 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs") pod "network-metrics-daemon-n77vq" (UID: "0b5b025a-d78e-4728-b492-19846b3ad862") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.829482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.829535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.829546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.829561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.829573 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.932196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.932419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.932428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.932441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:23 crc kubenswrapper[4869]: I0314 08:59:23.932450 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:23Z","lastTransitionTime":"2026-03-14T08:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.034911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.034958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.034966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.034981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.034990 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.137497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.137558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.137575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.137591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.137600 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.239676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.239745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.239764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.239789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.239806 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.343166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.343219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.343232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.343249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.343264 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.445780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.445871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.445891 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.445939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.445958 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.548721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.548769 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.548783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.548810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.548824 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.652064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.652137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.652156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.652183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.652202 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.702938 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:24 crc kubenswrapper[4869]: E0314 08:59:24.703079 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.703215 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.703249 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:24 crc kubenswrapper[4869]: E0314 08:59:24.703369 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:24 crc kubenswrapper[4869]: E0314 08:59:24.703485 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.755255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.755305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.755322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.755342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.755358 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.858913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.858981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.859000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.859043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.859067 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.962622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.962664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.962672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.962690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:24 crc kubenswrapper[4869]: I0314 08:59:24.962701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:24Z","lastTransitionTime":"2026-03-14T08:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.065995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.066057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.066073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.066096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.066112 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.170083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.170152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.170169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.170203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.170219 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.273361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.273465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.273484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.273535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.273555 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.376288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.376332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.376342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.376361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.376374 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.479634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.479684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.479694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.479727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.479737 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.582387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.582998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.583145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.583300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.583449 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.686895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.687322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.687418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.687576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.687656 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.702904 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:25 crc kubenswrapper[4869]: E0314 08:59:25.703092 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.777933 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:25 crc kubenswrapper[4869]: E0314 08:59:25.778324 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:25 crc kubenswrapper[4869]: E0314 08:59:25.778451 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs podName:0b5b025a-d78e-4728-b492-19846b3ad862 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:29.778435526 +0000 UTC m=+122.750717579 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs") pod "network-metrics-daemon-n77vq" (UID: "0b5b025a-d78e-4728-b492-19846b3ad862") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.790205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.790401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.790490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.790607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.790689 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.892865 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.893252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.893531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.893603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.893678 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.996082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.996411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.996498 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.996598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:25 crc kubenswrapper[4869]: I0314 08:59:25.996671 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:25Z","lastTransitionTime":"2026-03-14T08:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.099301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.099339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.099350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.099366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.099379 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:26Z","lastTransitionTime":"2026-03-14T08:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.201073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.201100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.201109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.201122 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.201159 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:26Z","lastTransitionTime":"2026-03-14T08:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.303650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.303719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.303730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.303750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.303763 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:26Z","lastTransitionTime":"2026-03-14T08:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.406944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.408078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.408351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.408598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.408871 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:26Z","lastTransitionTime":"2026-03-14T08:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.511835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.512310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.512538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.512825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.513046 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:26Z","lastTransitionTime":"2026-03-14T08:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.615641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.615944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.616047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.616150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.616225 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:26Z","lastTransitionTime":"2026-03-14T08:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.702777 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.702819 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.702785 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:26 crc kubenswrapper[4869]: E0314 08:59:26.703021 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:26 crc kubenswrapper[4869]: E0314 08:59:26.703052 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:26 crc kubenswrapper[4869]: E0314 08:59:26.703214 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.718598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.718686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.718710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.718741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.718761 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:26Z","lastTransitionTime":"2026-03-14T08:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.822261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.822293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.822302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.822315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.822323 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:26Z","lastTransitionTime":"2026-03-14T08:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.925554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.925636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.925665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.925696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:26 crc kubenswrapper[4869]: I0314 08:59:26.925720 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:26Z","lastTransitionTime":"2026-03-14T08:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.029908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.030458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.030629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.030723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.030815 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:27Z","lastTransitionTime":"2026-03-14T08:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.134466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.135000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.135193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.135345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.135504 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:27Z","lastTransitionTime":"2026-03-14T08:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.238075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.238502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.238752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.239087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.239281 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:27Z","lastTransitionTime":"2026-03-14T08:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.342626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.342693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.342710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.342736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.342755 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:27Z","lastTransitionTime":"2026-03-14T08:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.444854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.444908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.444920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.444939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.444953 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:27Z","lastTransitionTime":"2026-03-14T08:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.548964 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.549325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.549464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.549561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.549661 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:27Z","lastTransitionTime":"2026-03-14T08:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 08:59:27 crc kubenswrapper[4869]: E0314 08:59:27.650877 4869 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.703614 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:27 crc kubenswrapper[4869]: E0314 08:59:27.703752 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.716071 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.727840 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.738135 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.750158 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.769041 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0314 08:59:21.253553 6875 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:21.253704 6875 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:21.253749 6875 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:21.253786 6875 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0314 08:59:21.253812 6875 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.253883 6875 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0314 08:59:21.253915 6875 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:21.254033 6875 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0314 08:59:21.254078 6875 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0314 08:59:21.254161 6875 factory.go:656] Stopping watch factory\\\\nI0314 08:59:21.254204 6875 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:21.254046 6875 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.254398 6875 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0314 08:59:21.254439 6875 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0314 08:59:21.254479 6875 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:21.254669 6875 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.779633 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.790593 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: E0314 08:59:27.799583 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.803218 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.813832 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.830745 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.841969 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.855359 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.866320 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.878149 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc kubenswrapper[4869]: I0314 08:59:27.888213 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:27 crc 
kubenswrapper[4869]: I0314 08:59:27.901446 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:27Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:28 crc kubenswrapper[4869]: I0314 08:59:28.703285 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:28 crc kubenswrapper[4869]: I0314 08:59:28.703364 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:28 crc kubenswrapper[4869]: E0314 08:59:28.703402 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:28 crc kubenswrapper[4869]: I0314 08:59:28.703415 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:28 crc kubenswrapper[4869]: E0314 08:59:28.703493 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:28 crc kubenswrapper[4869]: E0314 08:59:28.703571 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:29 crc kubenswrapper[4869]: I0314 08:59:29.703575 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:29 crc kubenswrapper[4869]: E0314 08:59:29.703743 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:29 crc kubenswrapper[4869]: I0314 08:59:29.820238 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:29 crc kubenswrapper[4869]: E0314 08:59:29.820349 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:29 crc kubenswrapper[4869]: E0314 08:59:29.820406 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs podName:0b5b025a-d78e-4728-b492-19846b3ad862 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:37.820391577 +0000 UTC m=+130.792673630 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs") pod "network-metrics-daemon-n77vq" (UID: "0b5b025a-d78e-4728-b492-19846b3ad862") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:30 crc kubenswrapper[4869]: I0314 08:59:30.703849 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:30 crc kubenswrapper[4869]: I0314 08:59:30.703879 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:30 crc kubenswrapper[4869]: I0314 08:59:30.703870 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:30 crc kubenswrapper[4869]: E0314 08:59:30.704003 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:30 crc kubenswrapper[4869]: E0314 08:59:30.704098 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:30 crc kubenswrapper[4869]: E0314 08:59:30.704182 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.703343 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:31 crc kubenswrapper[4869]: E0314 08:59:31.703598 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.892321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.892377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.892395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.892418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.892439 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:31Z","lastTransitionTime":"2026-03-14T08:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:31 crc kubenswrapper[4869]: E0314 08:59:31.907703 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:31Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.913381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.913453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.913480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.913540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.913567 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:31Z","lastTransitionTime":"2026-03-14T08:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:31 crc kubenswrapper[4869]: E0314 08:59:31.933757 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:31Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.938117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.938162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.938177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.938196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.938211 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:31Z","lastTransitionTime":"2026-03-14T08:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:31 crc kubenswrapper[4869]: E0314 08:59:31.954072 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:31Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.957871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.957906 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.957914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.957928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.957938 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:31Z","lastTransitionTime":"2026-03-14T08:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:31 crc kubenswrapper[4869]: E0314 08:59:31.971327 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:31Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.975816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.975879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.975894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.975912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:31 crc kubenswrapper[4869]: I0314 08:59:31.975924 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:31Z","lastTransitionTime":"2026-03-14T08:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:31 crc kubenswrapper[4869]: E0314 08:59:31.992443 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:31Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:31 crc kubenswrapper[4869]: E0314 08:59:31.992591 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:32 crc kubenswrapper[4869]: I0314 08:59:32.551311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 08:59:32 crc kubenswrapper[4869]: I0314 08:59:32.551630 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:32 crc kubenswrapper[4869]: I0314 08:59:32.551713 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:32 crc kubenswrapper[4869]: I0314 08:59:32.551807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.551892 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:00:04.551851591 +0000 UTC m=+157.524133684 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.551954 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.551979 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552045 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 09:00:04.552020485 +0000 UTC m=+157.524302538 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552041 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552089 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552100 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 09:00:04.552067286 +0000 UTC m=+157.524349369 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552111 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552151 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552178 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552188 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-14 09:00:04.552165059 +0000 UTC m=+157.524447142 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:32 crc kubenswrapper[4869]: I0314 08:59:32.551976 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552201 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.552287 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 09:00:04.552280032 +0000 UTC m=+157.524562085 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 08:59:32 crc kubenswrapper[4869]: I0314 08:59:32.703549 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.703720 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:32 crc kubenswrapper[4869]: I0314 08:59:32.703575 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.703824 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:32 crc kubenswrapper[4869]: I0314 08:59:32.703549 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.703896 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:32 crc kubenswrapper[4869]: E0314 08:59:32.801838 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:33 crc kubenswrapper[4869]: I0314 08:59:33.703373 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:33 crc kubenswrapper[4869]: E0314 08:59:33.704750 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:34 crc kubenswrapper[4869]: I0314 08:59:34.703759 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:34 crc kubenswrapper[4869]: I0314 08:59:34.703777 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:34 crc kubenswrapper[4869]: I0314 08:59:34.703776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:34 crc kubenswrapper[4869]: E0314 08:59:34.704035 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:34 crc kubenswrapper[4869]: E0314 08:59:34.704147 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:34 crc kubenswrapper[4869]: E0314 08:59:34.704293 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:35 crc kubenswrapper[4869]: I0314 08:59:35.703614 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:35 crc kubenswrapper[4869]: E0314 08:59:35.703900 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:36 crc kubenswrapper[4869]: I0314 08:59:36.703600 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:36 crc kubenswrapper[4869]: I0314 08:59:36.703689 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:36 crc kubenswrapper[4869]: I0314 08:59:36.703600 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:36 crc kubenswrapper[4869]: E0314 08:59:36.703824 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:36 crc kubenswrapper[4869]: E0314 08:59:36.703921 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:36 crc kubenswrapper[4869]: E0314 08:59:36.704070 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.703655 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:37 crc kubenswrapper[4869]: E0314 08:59:37.704275 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.704675 4869 scope.go:117] "RemoveContainer" containerID="76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.723776 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.739918 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.753401 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.766688 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.781950 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc 
kubenswrapper[4869]: E0314 08:59:37.802279 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.802879 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0314 08:59:21.253553 6875 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:21.253704 6875 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:21.253749 6875 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:21.253786 6875 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0314 08:59:21.253812 6875 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.253883 6875 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0314 08:59:21.253915 6875 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:21.254033 6875 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0314 08:59:21.254078 6875 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0314 08:59:21.254161 6875 factory.go:656] Stopping watch factory\\\\nI0314 08:59:21.254204 6875 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:21.254046 6875 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.254398 6875 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0314 08:59:21.254439 6875 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0314 08:59:21.254479 6875 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:21.254669 6875 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.813578 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.824125 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.834329 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.844377 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.857307 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.870439 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.882967 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.903402 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b77954735882
17935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc
6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-d
ir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.914587 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:37 crc kubenswrapper[4869]: E0314 08:59:37.914764 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:37 crc kubenswrapper[4869]: E0314 08:59:37.914821 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs podName:0b5b025a-d78e-4728-b492-19846b3ad862 nodeName:}" failed. No retries permitted until 2026-03-14 08:59:53.914802972 +0000 UTC m=+146.887085025 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs") pod "network-metrics-daemon-n77vq" (UID: "0b5b025a-d78e-4728-b492-19846b3ad862") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.917392 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:37 crc kubenswrapper[4869]: I0314 08:59:37.929242 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:37Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.291702 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/1.log" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.293995 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5"} Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.294363 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.312661 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.325170 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.334916 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.344059 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc 
kubenswrapper[4869]: I0314 08:59:38.357204 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.371437 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.386066 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.398034 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.408721 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.424965 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0314 08:59:21.253553 6875 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:21.253704 6875 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:21.253749 6875 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:21.253786 6875 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0314 08:59:21.253812 6875 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.253883 6875 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0314 08:59:21.253915 6875 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:21.254033 6875 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0314 08:59:21.254078 6875 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0314 08:59:21.254161 6875 factory.go:656] Stopping watch factory\\\\nI0314 08:59:21.254204 6875 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:21.254046 6875 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.254398 6875 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0314 08:59:21.254439 6875 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0314 08:59:21.254479 6875 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:21.254669 6875 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.434418 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.443674 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.457222 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.467372 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.477481 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.486711 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:38Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.703290 4869 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:38 crc kubenswrapper[4869]: E0314 08:59:38.703411 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.703290 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:38 crc kubenswrapper[4869]: E0314 08:59:38.703468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:38 crc kubenswrapper[4869]: I0314 08:59:38.703303 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:38 crc kubenswrapper[4869]: E0314 08:59:38.703614 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.298386 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/2.log" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.298992 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/1.log" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.301738 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5" exitCode=1 Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.301784 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5"} Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.301835 4869 scope.go:117] "RemoveContainer" containerID="76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.302545 4869 scope.go:117] "RemoveContainer" containerID="7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5" Mar 14 08:59:39 crc kubenswrapper[4869]: E0314 08:59:39.302759 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" Mar 14 08:59:39 crc 
kubenswrapper[4869]: I0314 08:59:39.313207 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.331039 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b
3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.344824 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.364358 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76d54dcf90805b73cf2e9fbe4320240bdd8b35f693e565902134f35d747d4839\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"Node event handler 7 for removal\\\\nI0314 08:59:21.253553 6875 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0314 08:59:21.253704 6875 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0314 08:59:21.253749 6875 handler.go:208] Removed *v1.Node event handler 2\\\\nI0314 08:59:21.253786 6875 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0314 08:59:21.253812 6875 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.253883 6875 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0314 08:59:21.253915 6875 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0314 08:59:21.254033 6875 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0314 08:59:21.254078 6875 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0314 08:59:21.254161 6875 factory.go:656] Stopping watch factory\\\\nI0314 08:59:21.254204 6875 ovnkube.go:599] Stopped ovnkube\\\\nI0314 08:59:21.254046 6875 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0314 08:59:21.254398 6875 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0314 08:59:21.254439 6875 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0314 08:59:21.254479 6875 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0314 08:59:21.254669 6875 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:38Z\\\",\\\"message\\\":\\\"lse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0314 08:59:38.444210 7124 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nI0314 08:59:38.444220 7124 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nF0314 08:59:38.444197 7124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initia\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fe
a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.374277 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.384104 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.394968 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.406317 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b
439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.422661 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.433117 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.443898 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.458071 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653
ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.472773 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.483968 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc 
kubenswrapper[4869]: I0314 08:59:39.499630 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.512958 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:39Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:39 crc kubenswrapper[4869]: I0314 08:59:39.703084 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:39 crc kubenswrapper[4869]: E0314 08:59:39.703218 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.307087 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/2.log" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.311727 4869 scope.go:117] "RemoveContainer" containerID="7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5" Mar 14 08:59:40 crc kubenswrapper[4869]: E0314 08:59:40.311890 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.344473 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.358636 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.374584 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.392020 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.409031 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.423560 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.445536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.464158 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc 
kubenswrapper[4869]: I0314 08:59:40.481321 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.500615 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.519007 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.535947 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.556563 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.592070 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:38Z\\\",\\\"message\\\":\\\"lse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0314 08:59:38.444210 7124 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nI0314 08:59:38.444220 7124 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nF0314 08:59:38.444197 7124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initia\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.611911 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.633660 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:40Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.703100 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.703212 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:40 crc kubenswrapper[4869]: I0314 08:59:40.703130 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:40 crc kubenswrapper[4869]: E0314 08:59:40.703347 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:40 crc kubenswrapper[4869]: E0314 08:59:40.703630 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:40 crc kubenswrapper[4869]: E0314 08:59:40.703798 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:41 crc kubenswrapper[4869]: I0314 08:59:41.703719 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:41 crc kubenswrapper[4869]: E0314 08:59:41.703853 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:41 crc kubenswrapper[4869]: I0314 08:59:41.715336 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.040348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.040412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.040432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.040461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.040478 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:42Z","lastTransitionTime":"2026-03-14T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.055565 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.060001 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.060045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.060058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.060074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.060086 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:42Z","lastTransitionTime":"2026-03-14T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.074093 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.077986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.078021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.078030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.078044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.078054 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:42Z","lastTransitionTime":"2026-03-14T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.095996 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.099902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.099949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.099962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.099982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.099994 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:42Z","lastTransitionTime":"2026-03-14T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.112345 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.116314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.116371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.116383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.116398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.116410 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:42Z","lastTransitionTime":"2026-03-14T08:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.127693 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:42Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.127875 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.703589 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.703745 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:42 crc kubenswrapper[4869]: I0314 08:59:42.703830 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.703747 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.704061 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.704251 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:42 crc kubenswrapper[4869]: E0314 08:59:42.804271 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:43 crc kubenswrapper[4869]: I0314 08:59:43.703314 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:43 crc kubenswrapper[4869]: E0314 08:59:43.703487 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:44 crc kubenswrapper[4869]: I0314 08:59:44.703496 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:44 crc kubenswrapper[4869]: I0314 08:59:44.703544 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:44 crc kubenswrapper[4869]: I0314 08:59:44.703785 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:44 crc kubenswrapper[4869]: E0314 08:59:44.704048 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:44 crc kubenswrapper[4869]: E0314 08:59:44.704218 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:44 crc kubenswrapper[4869]: E0314 08:59:44.704409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:45 crc kubenswrapper[4869]: I0314 08:59:45.703756 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:45 crc kubenswrapper[4869]: E0314 08:59:45.703904 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:46 crc kubenswrapper[4869]: I0314 08:59:46.702732 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:46 crc kubenswrapper[4869]: E0314 08:59:46.702869 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:46 crc kubenswrapper[4869]: I0314 08:59:46.703125 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:46 crc kubenswrapper[4869]: E0314 08:59:46.703205 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:46 crc kubenswrapper[4869]: I0314 08:59:46.703328 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:46 crc kubenswrapper[4869]: E0314 08:59:46.703386 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.703040 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:47 crc kubenswrapper[4869]: E0314 08:59:47.703231 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.736987 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.751015 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.765055 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.774742 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc 
kubenswrapper[4869]: I0314 08:59:47.786881 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.800186 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: E0314 08:59:47.804904 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.814435 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.830559 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.842424 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e947491-f796-4ab0-b22e-7e1f3f0392c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db90
0df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.855021 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.875553 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:38Z\\\",\\\"message\\\":\\\"lse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0314 08:59:38.444210 7124 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nI0314 08:59:38.444220 7124 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nF0314 08:59:38.444197 7124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initia\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.886292 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.902210 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.914431 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.926860 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.939525 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:47 crc kubenswrapper[4869]: I0314 08:59:47.952226 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:47Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:48 crc kubenswrapper[4869]: I0314 08:59:48.703556 4869 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:48 crc kubenswrapper[4869]: E0314 08:59:48.703957 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:48 crc kubenswrapper[4869]: I0314 08:59:48.703671 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:48 crc kubenswrapper[4869]: E0314 08:59:48.704023 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:48 crc kubenswrapper[4869]: I0314 08:59:48.703656 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:48 crc kubenswrapper[4869]: E0314 08:59:48.704080 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:49 crc kubenswrapper[4869]: I0314 08:59:49.704553 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:49 crc kubenswrapper[4869]: E0314 08:59:49.704726 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:50 crc kubenswrapper[4869]: I0314 08:59:50.703467 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:50 crc kubenswrapper[4869]: I0314 08:59:50.703468 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:50 crc kubenswrapper[4869]: I0314 08:59:50.703630 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:50 crc kubenswrapper[4869]: E0314 08:59:50.703777 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:50 crc kubenswrapper[4869]: E0314 08:59:50.703852 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:50 crc kubenswrapper[4869]: E0314 08:59:50.703960 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:51 crc kubenswrapper[4869]: I0314 08:59:51.702827 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:51 crc kubenswrapper[4869]: E0314 08:59:51.702994 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:51 crc kubenswrapper[4869]: I0314 08:59:51.703899 4869 scope.go:117] "RemoveContainer" containerID="7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5" Mar 14 08:59:51 crc kubenswrapper[4869]: E0314 08:59:51.704149 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.426465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.426553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.426575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.426599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.426616 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:52Z","lastTransitionTime":"2026-03-14T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:52 crc kubenswrapper[4869]: E0314 08:59:52.450205 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:52Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.456243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.456301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.456322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.456348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.456366 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:52Z","lastTransitionTime":"2026-03-14T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:52 crc kubenswrapper[4869]: E0314 08:59:52.476016 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:52Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.480737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.480775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.480786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.480802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.480813 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T08:59:52Z","lastTransitionTime":"2026-03-14T08:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 08:59:52 crc kubenswrapper[4869]: E0314 08:59:52.551330 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:52Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:52 crc kubenswrapper[4869]: E0314 08:59:52.551783 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.703228 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:52 crc kubenswrapper[4869]: E0314 08:59:52.704118 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.703420 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:52 crc kubenswrapper[4869]: E0314 08:59:52.704321 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:52 crc kubenswrapper[4869]: I0314 08:59:52.703323 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:52 crc kubenswrapper[4869]: E0314 08:59:52.704538 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:52 crc kubenswrapper[4869]: E0314 08:59:52.806802 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 08:59:53 crc kubenswrapper[4869]: I0314 08:59:53.703577 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:53 crc kubenswrapper[4869]: E0314 08:59:53.703744 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:53 crc kubenswrapper[4869]: I0314 08:59:53.987676 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:53 crc kubenswrapper[4869]: E0314 08:59:53.987850 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:53 crc kubenswrapper[4869]: E0314 08:59:53.987947 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs podName:0b5b025a-d78e-4728-b492-19846b3ad862 nodeName:}" failed. No retries permitted until 2026-03-14 09:00:25.987917947 +0000 UTC m=+178.960200040 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs") pod "network-metrics-daemon-n77vq" (UID: "0b5b025a-d78e-4728-b492-19846b3ad862") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 08:59:54 crc kubenswrapper[4869]: I0314 08:59:54.703716 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:54 crc kubenswrapper[4869]: I0314 08:59:54.703719 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:54 crc kubenswrapper[4869]: E0314 08:59:54.703908 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:54 crc kubenswrapper[4869]: I0314 08:59:54.703719 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:54 crc kubenswrapper[4869]: E0314 08:59:54.703965 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:54 crc kubenswrapper[4869]: E0314 08:59:54.704027 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:55 crc kubenswrapper[4869]: I0314 08:59:55.703586 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:55 crc kubenswrapper[4869]: E0314 08:59:55.703717 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:55 crc kubenswrapper[4869]: I0314 08:59:55.719249 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 14 08:59:56 crc kubenswrapper[4869]: I0314 08:59:56.703718 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:56 crc kubenswrapper[4869]: I0314 08:59:56.703764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:56 crc kubenswrapper[4869]: I0314 08:59:56.703747 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:56 crc kubenswrapper[4869]: E0314 08:59:56.703896 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:56 crc kubenswrapper[4869]: E0314 08:59:56.704045 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:56 crc kubenswrapper[4869]: E0314 08:59:56.704144 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.369157 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/0.log" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.369865 4869 generic.go:334] "Generic (PLEG): container finished" podID="3aedc0f3-51fe-492b-9337-02b2b6e38327" containerID="8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1" exitCode=1 Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.369928 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nncq" event={"ID":"3aedc0f3-51fe-492b-9337-02b2b6e38327","Type":"ContainerDied","Data":"8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1"} Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.370481 4869 scope.go:117] "RemoveContainer" 
containerID="8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.386363 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc 
kubenswrapper[4869]: I0314 08:59:57.403688 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.416319 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.430635 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.444852 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.462928 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e947491-f796-4ab0-b22e-7e1f3f0392c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db90
0df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.476184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:56Z\\\",\\\"message\\\":\\\"2026-03-14T08:59:10+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d\\\\n2026-03-14T08:59:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d to /host/opt/cni/bin/\\\\n2026-03-14T08:59:11Z [verbose] multus-daemon started\\\\n2026-03-14T08:59:11Z [verbose] Readiness Indicator file check\\\\n2026-03-14T08:59:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.501576 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:38Z\\\",\\\"message\\\":\\\"lse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0314 08:59:38.444210 7124 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nI0314 08:59:38.444220 7124 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nF0314 08:59:38.444197 7124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initia\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.512159 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.523631 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1fc195-cf42-4f13-aabc-1611622185c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c20cb39397f3ad7196a978a8f681448e78f7215adc7a92c001c502491b3df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc 
kubenswrapper[4869]: I0314 08:59:57.536144 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.551801 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.563757 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.576860 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.586767 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.603257 4869 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b
7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d6
46197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.614525 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.628632 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.703188 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:57 crc kubenswrapper[4869]: E0314 08:59:57.703331 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.723840 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2
f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10
daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.736897 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.748968 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.773421 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.795102 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: E0314 08:59:57.807150 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.814180 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.827573 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.836618 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc 
kubenswrapper[4869]: I0314 08:59:57.846894 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e947491-f796-4ab0-b22e-7e1f3f0392c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.856860 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1fc195-cf42-4f13-aabc-1611622185c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c20cb39397f3ad7196a978a8f681448e78f7215adc7a92c001c502491b3df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.867549 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.878645 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.888544 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.898550 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.910426 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:56Z\\\",\\\"message\\\":\\\"2026-03-14T08:59:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d\\\\n2026-03-14T08:59:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d to /host/opt/cni/bin/\\\\n2026-03-14T08:59:11Z [verbose] multus-daemon started\\\\n2026-03-14T08:59:11Z [verbose] Readiness Indicator file check\\\\n2026-03-14T08:59:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.926557 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:38Z\\\",\\\"message\\\":\\\"lse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0314 08:59:38.444210 7124 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nI0314 08:59:38.444220 7124 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nF0314 08:59:38.444197 7124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initia\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.935435 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:57 crc kubenswrapper[4869]: I0314 08:59:57.944477 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:57Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.375163 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/0.log" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.375225 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nncq" event={"ID":"3aedc0f3-51fe-492b-9337-02b2b6e38327","Type":"ContainerStarted","Data":"10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1"} Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.389422 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e947491-f796-4ab0-b22e-7e1f3f0392c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.401814 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.414432 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.426344 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.440521 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:56Z\\\",\\\"message\\\":\\\"2026-03-14T08:59:10+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d\\\\n2026-03-14T08:59:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d to /host/opt/cni/bin/\\\\n2026-03-14T08:59:11Z [verbose] multus-daemon started\\\\n2026-03-14T08:59:11Z [verbose] Readiness Indicator file check\\\\n2026-03-14T08:59:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.463931 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:38Z\\\",\\\"message\\\":\\\"lse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0314 08:59:38.444210 7124 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nI0314 08:59:38.444220 7124 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nF0314 08:59:38.444197 7124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initia\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.476779 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.488951 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1fc195-cf42-4f13-aabc-1611622185c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c20cb39397f3ad7196a978a8f681448e78f7215adc7a92c001c502491b3df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc 
kubenswrapper[4869]: I0314 08:59:58.502832 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.515721 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b
439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.529823 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.551478 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.567212 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.583347 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.597615 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.614349 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.625754 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc 
kubenswrapper[4869]: I0314 08:59:58.639762 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T08:59:58Z is after 2025-08-24T17:21:41Z" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.702778 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.702821 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 08:59:58 crc kubenswrapper[4869]: I0314 08:59:58.702873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 08:59:58 crc kubenswrapper[4869]: E0314 08:59:58.702918 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 08:59:58 crc kubenswrapper[4869]: E0314 08:59:58.703001 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 08:59:58 crc kubenswrapper[4869]: E0314 08:59:58.703106 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 08:59:59 crc kubenswrapper[4869]: I0314 08:59:59.703382 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 08:59:59 crc kubenswrapper[4869]: E0314 08:59:59.703596 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:00 crc kubenswrapper[4869]: I0314 09:00:00.702965 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:00 crc kubenswrapper[4869]: I0314 09:00:00.702990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:00 crc kubenswrapper[4869]: I0314 09:00:00.702990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:00 crc kubenswrapper[4869]: E0314 09:00:00.703251 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:00 crc kubenswrapper[4869]: E0314 09:00:00.703314 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:00 crc kubenswrapper[4869]: E0314 09:00:00.703116 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:01 crc kubenswrapper[4869]: I0314 09:00:01.703044 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:01 crc kubenswrapper[4869]: E0314 09:00:01.703550 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.703398 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.703445 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:02 crc kubenswrapper[4869]: E0314 09:00:02.703539 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:02 crc kubenswrapper[4869]: E0314 09:00:02.703606 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.703688 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:02 crc kubenswrapper[4869]: E0314 09:00:02.704061 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:02 crc kubenswrapper[4869]: E0314 09:00:02.808374 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.928082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.928117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.928126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.928139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.928161 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:02Z","lastTransitionTime":"2026-03-14T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:02 crc kubenswrapper[4869]: E0314 09:00:02.940773 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:02Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.945087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.945151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.945164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.945178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.945190 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:02Z","lastTransitionTime":"2026-03-14T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:02 crc kubenswrapper[4869]: E0314 09:00:02.962706 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:02Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.966370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.966408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.966419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.966435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.966446 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:02Z","lastTransitionTime":"2026-03-14T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:02 crc kubenswrapper[4869]: E0314 09:00:02.977599 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:02Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.980888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.980913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.980922 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.980937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.980946 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:02Z","lastTransitionTime":"2026-03-14T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:02 crc kubenswrapper[4869]: E0314 09:00:02.991885 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:02Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.995397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.995437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.995449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.995466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:02 crc kubenswrapper[4869]: I0314 09:00:02.995477 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:02Z","lastTransitionTime":"2026-03-14T09:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:03 crc kubenswrapper[4869]: E0314 09:00:03.008761 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:03Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:03 crc kubenswrapper[4869]: E0314 09:00:03.008874 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 09:00:03 crc kubenswrapper[4869]: I0314 09:00:03.702941 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:03 crc kubenswrapper[4869]: E0314 09:00:03.703649 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:03 crc kubenswrapper[4869]: I0314 09:00:03.704268 4869 scope.go:117] "RemoveContainer" containerID="7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.394616 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/2.log" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.398689 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.399248 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.412976 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.425947 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.438475 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.454944 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.470320 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:56Z\\\",\\\"message\\\":\\\"2026-03-14T08:59:10+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d\\\\n2026-03-14T08:59:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d to /host/opt/cni/bin/\\\\n2026-03-14T08:59:11Z [verbose] multus-daemon started\\\\n2026-03-14T08:59:11Z [verbose] Readiness Indicator file check\\\\n2026-03-14T08:59:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.493785 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:38Z\\\",\\\"message\\\":\\\"lse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0314 08:59:38.444210 7124 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nI0314 08:59:38.444220 7124 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nF0314 08:59:38.444197 7124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initia\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T09:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.505649 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.522750 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1fc195-cf42-4f13-aabc-1611622185c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c20cb39397f3ad7196a978a8f681448e78f7215adc7a92c001c502491b3df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc 
kubenswrapper[4869]: I0314 09:00:04.544568 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.561970 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.571983 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.572120 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.572154 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.572201 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.572241 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572274 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572369 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572387 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572386 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.572324357 +0000 UTC m=+221.544606520 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572399 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572408 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572640 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572630 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572686 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572466 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.57244194 +0000 UTC m=+221.544724203 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572803 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.572780138 +0000 UTC m=+221.545062331 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572829 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-03-14 09:01:08.572819159 +0000 UTC m=+221.545101472 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.572848 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.572838959 +0000 UTC m=+221.545121242 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.578086 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.602111 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.616288 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.629157 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.648957 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.664315 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc 
kubenswrapper[4869]: I0314 09:00:04.685085 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.698484 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e947491-f796-4ab0-b22e-7e1f3f0392c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
26-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.703842 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.703854 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:04 crc kubenswrapper[4869]: I0314 09:00:04.704034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.704143 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.704324 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:04 crc kubenswrapper[4869]: E0314 09:00:04.704494 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.404881 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/3.log" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.405656 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/2.log" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.408886 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" exitCode=1 Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.408922 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.408951 4869 scope.go:117] "RemoveContainer" containerID="7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.409500 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:00:05 crc kubenswrapper[4869]: E0314 09:00:05.409664 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" Mar 14 09:00:05 crc kubenswrapper[4869]: 
I0314 09:00:05.425291 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e947491-f796-4ab0-b22e-7e1f3f0392c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.441357 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.457975 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1fc195-cf42-4f13-aabc-1611622185c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c20cb39397f3ad7196a978a8f681448e78f7215adc7a92c001c502491b3df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc 
kubenswrapper[4869]: I0314 09:00:05.471085 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.484234 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.493625 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.504763 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.517024 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:56Z\\\",\\\"message\\\":\\\"2026-03-14T08:59:10+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d\\\\n2026-03-14T08:59:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d to /host/opt/cni/bin/\\\\n2026-03-14T08:59:11Z [verbose] multus-daemon started\\\\n2026-03-14T08:59:11Z [verbose] Readiness Indicator file check\\\\n2026-03-14T08:59:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.537422 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b9cd78eb109908cd16edf342c49dff8f508c168891dc7af2531d83cadae56f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:38Z\\\",\\\"message\\\":\\\"lse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-controller-manager/controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0314 08:59:38.444210 7124 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nI0314 08:59:38.444220 7124 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-n77vq\\\\nF0314 08:59:38.444197 7124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initia\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"message\\\":\\\"s:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0314 09:00:04.582905 7453 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z]\\\\nI0314 09:00:04.582852 7453 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} 
name:Service_openshift-operator-lifecycle-manage\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T09:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea
66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.553389 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b
439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.576491 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.588940 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.601566 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.613763 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.626878 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.637860 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.657575 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc kubenswrapper[4869]: I0314 09:00:05.671662 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:05Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:05 crc 
kubenswrapper[4869]: I0314 09:00:05.703161 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:05 crc kubenswrapper[4869]: E0314 09:00:05.703331 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.413088 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/3.log" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.416634 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:00:06 crc kubenswrapper[4869]: E0314 09:00:06.416804 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.435834 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e947491-f796-4ab0-b22e-7e1f3f0392c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.449308 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbb
f0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.463567 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69
ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.477697 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:56Z\\\",\\\"message\\\":\\\"2026-03-14T08:59:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d\\\\n2026-03-14T08:59:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d to /host/opt/cni/bin/\\\\n2026-03-14T08:59:11Z [verbose] multus-daemon started\\\\n2026-03-14T08:59:11Z [verbose] 
Readiness Indicator file check\\\\n2026-03-14T08:59:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.504954 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"message\\\":\\\"s:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0314 09:00:04.582905 7453 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z]\\\\nI0314 09:00:04.582852 7453 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manage\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T09:00:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.516244 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.524782 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1fc195-cf42-4f13-aabc-1611622185c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c20cb39397f3ad7196a978a8f681448e78f7215adc7a92c001c502491b3df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc 
kubenswrapper[4869]: I0314 09:00:06.536100 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.546308 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.557197 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b
439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.573309 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.584435 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.595816 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.607903 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653
ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.621279 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.629636 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc 
kubenswrapper[4869]: I0314 09:00:06.640856 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.651974 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:06Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.702899 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.702950 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:06 crc kubenswrapper[4869]: I0314 09:00:06.702910 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:06 crc kubenswrapper[4869]: E0314 09:00:06.703041 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:06 crc kubenswrapper[4869]: E0314 09:00:06.703104 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:06 crc kubenswrapper[4869]: E0314 09:00:06.703170 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.703623 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:07 crc kubenswrapper[4869]: E0314 09:00:07.703982 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.718665 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e947491-f796-4ab0-b22e-7e1f3f0392c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.730955 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.743911 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.754532 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.769176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:56Z\\\",\\\"message\\\":\\\"2026-03-14T08:59:10+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d\\\\n2026-03-14T08:59:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d to /host/opt/cni/bin/\\\\n2026-03-14T08:59:11Z [verbose] multus-daemon started\\\\n2026-03-14T08:59:11Z [verbose] Readiness Indicator file check\\\\n2026-03-14T08:59:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.785359 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"message\\\":\\\"s:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0314 09:00:04.582905 7453 ovnkube.go:137] failed to 
run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z]\\\\nI0314 09:00:04.582852 7453 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manage\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T09:00:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.795607 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.807845 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1fc195-cf42-4f13-aabc-1611622185c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c20cb39397f3ad7196a978a8f681448e78f7215adc7a92c001c502491b3df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc 
kubenswrapper[4869]: E0314 09:00:07.808911 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.819887 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.831228 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b
439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.842472 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.862115 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.876179 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.891724 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.907158 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.921359 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc kubenswrapper[4869]: I0314 09:00:07.932088 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:07 crc 
kubenswrapper[4869]: I0314 09:00:07.946187 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:07Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:08 crc kubenswrapper[4869]: I0314 09:00:08.703617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:08 crc kubenswrapper[4869]: I0314 09:00:08.703749 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:08 crc kubenswrapper[4869]: E0314 09:00:08.703865 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:08 crc kubenswrapper[4869]: E0314 09:00:08.703991 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:08 crc kubenswrapper[4869]: I0314 09:00:08.703627 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:08 crc kubenswrapper[4869]: E0314 09:00:08.704114 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:09 crc kubenswrapper[4869]: I0314 09:00:09.703877 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:09 crc kubenswrapper[4869]: E0314 09:00:09.704148 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:10 crc kubenswrapper[4869]: I0314 09:00:10.702930 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:10 crc kubenswrapper[4869]: I0314 09:00:10.703022 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:10 crc kubenswrapper[4869]: I0314 09:00:10.703215 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:10 crc kubenswrapper[4869]: E0314 09:00:10.703413 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:10 crc kubenswrapper[4869]: E0314 09:00:10.703443 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:10 crc kubenswrapper[4869]: E0314 09:00:10.703532 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:10 crc kubenswrapper[4869]: I0314 09:00:10.718036 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 14 09:00:11 crc kubenswrapper[4869]: I0314 09:00:11.703998 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:11 crc kubenswrapper[4869]: E0314 09:00:11.704324 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:12 crc kubenswrapper[4869]: I0314 09:00:12.703594 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:12 crc kubenswrapper[4869]: E0314 09:00:12.704148 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:12 crc kubenswrapper[4869]: I0314 09:00:12.703873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:12 crc kubenswrapper[4869]: I0314 09:00:12.703712 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:12 crc kubenswrapper[4869]: E0314 09:00:12.704230 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:12 crc kubenswrapper[4869]: E0314 09:00:12.704433 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:12 crc kubenswrapper[4869]: E0314 09:00:12.811528 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.211867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.211938 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.211958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.212024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.212044 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:13Z","lastTransitionTime":"2026-03-14T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 09:00:13 crc kubenswrapper[4869]: E0314 09:00:13.225851 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:13Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.231682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.231764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.231788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.231815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.231833 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:13Z","lastTransitionTime":"2026-03-14T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:13 crc kubenswrapper[4869]: E0314 09:00:13.249061 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:13Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.252919 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.253054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.253084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.253145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.253170 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:13Z","lastTransitionTime":"2026-03-14T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:13 crc kubenswrapper[4869]: E0314 09:00:13.268571 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:13Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.272704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.272754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.272766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.272789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.272810 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:13Z","lastTransitionTime":"2026-03-14T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:13 crc kubenswrapper[4869]: E0314 09:00:13.287957 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:13Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.293075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.293139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.293152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.293193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.293208 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:13Z","lastTransitionTime":"2026-03-14T09:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 14 09:00:13 crc kubenswrapper[4869]: E0314 09:00:13.311249 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-14T09:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e8736076-5c62-4abb-8b49-b2af716eaec4\\\",\\\"systemUUID\\\":\\\"b9f13929-24fa-42f7-b237-4766a535e935\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:13Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:13 crc kubenswrapper[4869]: E0314 09:00:13.311378 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 14 09:00:13 crc kubenswrapper[4869]: I0314 09:00:13.703144 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:13 crc kubenswrapper[4869]: E0314 09:00:13.703414 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:14 crc kubenswrapper[4869]: I0314 09:00:14.703427 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:14 crc kubenswrapper[4869]: I0314 09:00:14.703559 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:14 crc kubenswrapper[4869]: I0314 09:00:14.703714 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:14 crc kubenswrapper[4869]: E0314 09:00:14.703730 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:14 crc kubenswrapper[4869]: E0314 09:00:14.703955 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:14 crc kubenswrapper[4869]: E0314 09:00:14.704056 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:15 crc kubenswrapper[4869]: I0314 09:00:15.703460 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:15 crc kubenswrapper[4869]: E0314 09:00:15.703763 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:16 crc kubenswrapper[4869]: I0314 09:00:16.703054 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:16 crc kubenswrapper[4869]: I0314 09:00:16.703144 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:16 crc kubenswrapper[4869]: I0314 09:00:16.703073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:16 crc kubenswrapper[4869]: E0314 09:00:16.703356 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:16 crc kubenswrapper[4869]: E0314 09:00:16.703790 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:16 crc kubenswrapper[4869]: E0314 09:00:16.704142 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.703604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:17 crc kubenswrapper[4869]: E0314 09:00:17.703819 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.720653 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e947491-f796-4ab0-b22e-7e1f3f0392c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://014f061ecc7ee161ccdb359979ebfc94988087f4b57840237577e9ba12af7480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://85789e5a09eaec1b036d2d77c9b912f1759b2fad9cdabc3c085171226cc203a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e81cffa06a67998434572e483c47a77a0056873df8bf4c22c16014a11ecf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d874c970ab2a3a7c5581db900df3c98e3c63e7a8a7df5d78818b053ca44794e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.735263 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9nncq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3aedc0f3-51fe-492b-9337-02b2b6e38327\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T08:59:56Z\\\",\\\"message\\\":\\\"2026-03-14T08:59:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d\\\\n2026-03-14T08:59:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7571d3d4-0323-409a-87f2-76dbef92559d to /host/opt/cni/bin/\\\\n2026-03-14T08:59:11Z [verbose] multus-daemon started\\\\n2026-03-14T08:59:11Z [verbose] 
Readiness Indicator file check\\\\n2026-03-14T08:59:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzxc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9nncq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.763431 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"489ada67-a888-460e-862c-cd59acc0c6fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-14T09:00:04Z\\\",\\\"message\\\":\\\"s:{GoSet:[]} vips:{GoMap:map[10.217.5.36:443:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7f9b8f25-db1a-4d02-a423-9afc5c2fb83c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0314 09:00:04.582905 7453 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:04Z is after 2025-08-24T17:21:41Z]\\\\nI0314 09:00:04.582852 7453 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manage\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T09:00:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://335e2cc00cba0020be
a66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2tv2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bhcmd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.777007 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fr765" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba53a2e-898a-4ea2-b6c2-c9624c757416\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a583fe1372e1c04767919cc8833d3be9db87d4f35ccd546eb532e5c7a75f8cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gbvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fr765\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.791020 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad1fc195-cf42-4f13-aabc-1611622185c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c20cb39397f3ad7196a978a8f681448e78f7215adc7a92c001c502491b3df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b213a554b4d3faa613a12af55ec06664bfecc3a14bc4089d97ba6067c8e245c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc 
kubenswrapper[4869]: I0314 09:00:17.805473 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: E0314 09:00:17.812226 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.819766 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.834690 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9csf6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec9b7f2a-6b0e-434c-8fc5-7736b07d8c1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1158ecbbf0cdd936871bbc56f61f79c91445dd786768980f9a10b219530cb0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vqrxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9csf6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.848330 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e08d1ace-1d27-4a7d-b08e-c245a103c56f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f341c3aa127c9ed1a488e36c1e5607b190bf5f6ae54b5c0904a9d5da56c9dd33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bgflb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jj985\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.861386 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaac38bd-f9ff-4eee-89ab-2b2ef4cf57ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe73f85d211f7ce56265f55905136674d646ee15e1e871b07d9bfd6e1cc25cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a1af2dab32745f1963541bb4a6e0309d61b439d89ba7050361714cbd42069a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-84zwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-s2v62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.883328 4869 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b9ef211-aecf-4300-bc14-3fdf1fc59323\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://445693c28df3d7fd3572d0d5cbc9be07b5f5a0fe81afa6b2d01d968207d7a8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784e47de13a17dbe8b92b08e1d68cce897b
7795473588217935e77eb4bd305e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b83becb1029d691e6eeaea5b2695f2a02b2b9e4fe19ab7bc96fc09df9e4d94ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a41671fbc8c363194092d696686190a3c425bbd4b3574d27f083d45d9487c386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://552e06e9f7eb509c06448910a580f987e7aa3079a60247ed9cf3fdfb529c5c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2f35d728d646197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2f35d728d6
46197be09dc6cc61130c53d2d186bac573d241209b900a1bb5e61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bad112e0d4c4f1143b6a292f4a7c10e7ac6c546a436c2cfa3d04d291290c62c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b406cc2cbb7a7043327dc39a5bf10daf2eb6f87e96816aad1a06a0aedc76026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.898686 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.911927 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3937cbf905f450136e08d4b4f86a8abbf1d19d05c976477dba0d95ea8e6f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.922994 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n77vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5b025a-d78e-4728-b492-19846b3ad862\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t49vv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n77vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc 
kubenswrapper[4869]: I0314 09:00:17.937606 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ed63d38-6eaf-4a6b-90f2-33571f319b1b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c627bf53a4d827
0e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:20Z\\\",\\\"message\\\":\\\"W0314 08:58:19.939020 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0314 08:58:19.939360 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773478699 cert, and key in /tmp/serving-cert-1550864346/serving-signer.crt, /tmp/serving-cert-1550864346/serving-signer.key\\\\nI0314 08:58:20.218016 1 observer_polling.go:159] Starting file observer\\\\nW0314 08:58:20.226657 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0314 08:58:20.226800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0314 08:58:20.227470 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1550864346/tls.crt::/tmp/serving-cert-1550864346/tls.key\\\\\\\"\\\\nF0314 08:58:20.762612 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:58:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.952279 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f702c09f-63ab-4a71-91f2-c752180e3272\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:58:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b209c4c7b25dd6954b494380c36af2ec214e81fc357c40b8ad43d178533a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c0737
2b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54305702853ee9b84f5c9e0873af79ee05cd4393d41fc890f0fb67d393fa048c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-14T08:58:29Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0314 08:57:59.864958 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0314 08:57:59.865846 1 observer_polling.go:159] Starting file observer\\\\nI0314 08:57:59.866748 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0314 08:57:59.867419 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0314 08:58:22.204502 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0314 08:58:29.426501 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0314 08:58:29.426681 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-14T08:57:59Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8dfdf89c5eabafa0eb7699f688cd19b618d9dec88a291eaa255669ef5cb5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:28Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b917cd2840a112af49cdeaf81d121bb8e5e6835cb19d3598b412c890a15c1d49\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4ee4f1b46d2eab88614dc02cbc4239ff5317feb3132a80acd0c4c8132388d14\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:57:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.965594 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d21fe39d8b65099bc36393ea8f6f2819c0bafffab37eb7a19dc58c12f365461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.980452 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06383a2fe7cb5b85e61f6d3e0384abd212f111216c84042dbe47be0950b50d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88aa32da5045653ba76ff03c3304968b4c52455607494f98e6bbc7f4027d53d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:17 crc kubenswrapper[4869]: I0314 09:00:17.996536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f2679ec-a6bd-483b-b5b5-4615e83942a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-14T08:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2856fa74dd522dd605c7538f4cc5b33ce4464e0d1f96538a8319ec5f6a721462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-14T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c4b5f3a5062ecbb441a37746e1e6c4cdef89d7f85da3e8183de51eabbecbba5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea9e54921b27ba250af30f786856135f541efebd881810b1369526db066a9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:10Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3079eda056409570305b614d6e9fceb1d662603a670c6377d8621a746480e536\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b21d
f21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b21df21139e621487a256e7766f47a4b01a427bd8167e094d90aeb929c96060\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eabeccab34c68ebfde4686b97d95c25269b6c4eb31004338ae8374392d825f1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75ce8e592c86fa953c28e266c39aa760a333ed8f1c3e79c86dbc69ee42666097\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-14T08:59:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-14T08:59:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrwh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-14T08:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lfk4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-14T09:00:17Z is after 2025-08-24T17:21:41Z" Mar 14 09:00:18 crc kubenswrapper[4869]: I0314 09:00:18.703165 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:18 crc kubenswrapper[4869]: I0314 09:00:18.703214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:18 crc kubenswrapper[4869]: I0314 09:00:18.703169 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:18 crc kubenswrapper[4869]: E0314 09:00:18.703290 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:18 crc kubenswrapper[4869]: E0314 09:00:18.703408 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:18 crc kubenswrapper[4869]: E0314 09:00:18.703588 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:19 crc kubenswrapper[4869]: I0314 09:00:19.703331 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:19 crc kubenswrapper[4869]: E0314 09:00:19.703537 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:20 crc kubenswrapper[4869]: I0314 09:00:20.703555 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:20 crc kubenswrapper[4869]: E0314 09:00:20.703770 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:20 crc kubenswrapper[4869]: I0314 09:00:20.704103 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:20 crc kubenswrapper[4869]: E0314 09:00:20.704170 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:20 crc kubenswrapper[4869]: I0314 09:00:20.704313 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:20 crc kubenswrapper[4869]: E0314 09:00:20.704426 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:20 crc kubenswrapper[4869]: I0314 09:00:20.705473 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:00:20 crc kubenswrapper[4869]: E0314 09:00:20.705725 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" Mar 14 09:00:21 crc kubenswrapper[4869]: I0314 09:00:21.703589 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:21 crc kubenswrapper[4869]: E0314 09:00:21.703714 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:22 crc kubenswrapper[4869]: I0314 09:00:22.703190 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:22 crc kubenswrapper[4869]: E0314 09:00:22.703331 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:22 crc kubenswrapper[4869]: I0314 09:00:22.703388 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:22 crc kubenswrapper[4869]: I0314 09:00:22.703412 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:22 crc kubenswrapper[4869]: E0314 09:00:22.703585 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:22 crc kubenswrapper[4869]: E0314 09:00:22.703749 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:22 crc kubenswrapper[4869]: E0314 09:00:22.813479 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.658662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.659030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.659076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.659096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.659108 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T09:00:23Z","lastTransitionTime":"2026-03-14T09:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.702868 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:23 crc kubenswrapper[4869]: E0314 09:00:23.703003 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.707293 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh"] Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.707726 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.709559 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.709590 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.710373 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.710529 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.747994 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.748144 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lfk4t" podStartSLOduration=112.748113202 podStartE2EDuration="1m52.748113202s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:23.747874057 +0000 UTC m=+176.720156110" watchObservedRunningTime="2026-03-14 09:00:23.748113202 +0000 UTC m=+176.720395275" Mar 14 
09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.758799 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.779908 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.779888228 podStartE2EDuration="1m17.779888228s" podCreationTimestamp="2026-03-14 08:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:23.779617362 +0000 UTC m=+176.751899425" watchObservedRunningTime="2026-03-14 09:00:23.779888228 +0000 UTC m=+176.752170281" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.803403 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=13.803378889 podStartE2EDuration="13.803378889s" podCreationTimestamp="2026-03-14 09:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:23.802262281 +0000 UTC m=+176.774544344" watchObservedRunningTime="2026-03-14 09:00:23.803378889 +0000 UTC m=+176.775660962" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.807469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.807595 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" 
(UniqueName: \"kubernetes.io/host-path/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.807646 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.807707 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.807734 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.851802 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.851783485 podStartE2EDuration="42.851783485s" podCreationTimestamp="2026-03-14 08:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:23.838081517 +0000 UTC m=+176.810363580" watchObservedRunningTime="2026-03-14 09:00:23.851783485 +0000 UTC m=+176.824065538" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.865197 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9csf6" podStartSLOduration=112.865181737 podStartE2EDuration="1m52.865181737s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:23.864472039 +0000 UTC m=+176.836754092" watchObservedRunningTime="2026-03-14 09:00:23.865181737 +0000 UTC m=+176.837463790" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.877986 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podStartSLOduration=112.877963613 podStartE2EDuration="1m52.877963613s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:23.876342753 +0000 UTC m=+176.848624806" watchObservedRunningTime="2026-03-14 09:00:23.877963613 +0000 UTC m=+176.850245666" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.890759 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9nncq" podStartSLOduration=112.890730118 podStartE2EDuration="1m52.890730118s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:23.890698718 +0000 UTC m=+176.862980781" watchObservedRunningTime="2026-03-14 09:00:23.890730118 +0000 UTC m=+176.863012181" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 
09:00:23.909155 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.909238 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.909282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.909299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.909327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: 
\"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.909659 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.909803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.910387 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.922937 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.932285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/79f6b917-2bb5-45a1-a3c7-bb21b4e01503-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jmjkh\" (UID: \"79f6b917-2bb5-45a1-a3c7-bb21b4e01503\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.946952 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fr765" podStartSLOduration=112.946930077 podStartE2EDuration="1m52.946930077s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:23.934406098 +0000 UTC m=+176.906688151" watchObservedRunningTime="2026-03-14 09:00:23.946930077 +0000 UTC m=+176.919212130" Mar 14 09:00:23 crc kubenswrapper[4869]: I0314 09:00:23.960307 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.960290208 podStartE2EDuration="28.960290208s" podCreationTimestamp="2026-03-14 08:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:23.947466531 +0000 UTC m=+176.919748604" watchObservedRunningTime="2026-03-14 09:00:23.960290208 +0000 UTC m=+176.932572261" Mar 14 09:00:24 crc kubenswrapper[4869]: I0314 09:00:24.025323 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" Mar 14 09:00:24 crc kubenswrapper[4869]: I0314 09:00:24.052479 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-s2v62" podStartSLOduration=112.052449777 podStartE2EDuration="1m52.052449777s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:24.002357148 +0000 UTC m=+176.974639191" watchObservedRunningTime="2026-03-14 09:00:24.052449777 +0000 UTC m=+177.024731830" Mar 14 09:00:24 crc kubenswrapper[4869]: I0314 09:00:24.069907 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=63.069878097 podStartE2EDuration="1m3.069878097s" podCreationTimestamp="2026-03-14 08:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:24.052663132 +0000 UTC m=+177.024945185" watchObservedRunningTime="2026-03-14 09:00:24.069878097 +0000 UTC m=+177.042160150" Mar 14 09:00:24 crc kubenswrapper[4869]: I0314 09:00:24.478448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" event={"ID":"79f6b917-2bb5-45a1-a3c7-bb21b4e01503","Type":"ContainerStarted","Data":"c5bcff1076906764db701e36894ea2144bb9cbd34c4e22dcf97e5df58f6507da"} Mar 14 09:00:24 crc kubenswrapper[4869]: I0314 09:00:24.478562 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" event={"ID":"79f6b917-2bb5-45a1-a3c7-bb21b4e01503","Type":"ContainerStarted","Data":"abff7cf9e9e8abb872200a8501a17020c40b71c16b015bca2bb640a9b0064e65"} Mar 14 09:00:24 crc kubenswrapper[4869]: I0314 
09:00:24.497041 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jmjkh" podStartSLOduration=113.497012386 podStartE2EDuration="1m53.497012386s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:24.496931734 +0000 UTC m=+177.469213817" watchObservedRunningTime="2026-03-14 09:00:24.497012386 +0000 UTC m=+177.469294439" Mar 14 09:00:24 crc kubenswrapper[4869]: I0314 09:00:24.703120 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:24 crc kubenswrapper[4869]: I0314 09:00:24.703165 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:24 crc kubenswrapper[4869]: I0314 09:00:24.703192 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:24 crc kubenswrapper[4869]: E0314 09:00:24.703266 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:24 crc kubenswrapper[4869]: E0314 09:00:24.703553 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:24 crc kubenswrapper[4869]: E0314 09:00:24.703822 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:25 crc kubenswrapper[4869]: I0314 09:00:25.702863 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:25 crc kubenswrapper[4869]: E0314 09:00:25.703048 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:26 crc kubenswrapper[4869]: I0314 09:00:26.033969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:26 crc kubenswrapper[4869]: E0314 09:00:26.034116 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 09:00:26 crc kubenswrapper[4869]: E0314 09:00:26.034169 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs podName:0b5b025a-d78e-4728-b492-19846b3ad862 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:30.034154886 +0000 UTC m=+243.006436929 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs") pod "network-metrics-daemon-n77vq" (UID: "0b5b025a-d78e-4728-b492-19846b3ad862") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 14 09:00:26 crc kubenswrapper[4869]: I0314 09:00:26.703675 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:26 crc kubenswrapper[4869]: I0314 09:00:26.703725 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:26 crc kubenswrapper[4869]: I0314 09:00:26.703701 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:26 crc kubenswrapper[4869]: E0314 09:00:26.703803 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:26 crc kubenswrapper[4869]: E0314 09:00:26.703897 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:26 crc kubenswrapper[4869]: E0314 09:00:26.704097 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:27 crc kubenswrapper[4869]: I0314 09:00:27.703778 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:27 crc kubenswrapper[4869]: E0314 09:00:27.704950 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:27 crc kubenswrapper[4869]: E0314 09:00:27.814110 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:28 crc kubenswrapper[4869]: I0314 09:00:28.703390 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:28 crc kubenswrapper[4869]: I0314 09:00:28.703494 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:28 crc kubenswrapper[4869]: E0314 09:00:28.703554 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:28 crc kubenswrapper[4869]: I0314 09:00:28.703642 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:28 crc kubenswrapper[4869]: E0314 09:00:28.703673 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:28 crc kubenswrapper[4869]: E0314 09:00:28.703839 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:29 crc kubenswrapper[4869]: I0314 09:00:29.702845 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:29 crc kubenswrapper[4869]: E0314 09:00:29.703403 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:30 crc kubenswrapper[4869]: I0314 09:00:30.703764 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:30 crc kubenswrapper[4869]: I0314 09:00:30.703803 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:30 crc kubenswrapper[4869]: E0314 09:00:30.703878 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:30 crc kubenswrapper[4869]: I0314 09:00:30.703764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:30 crc kubenswrapper[4869]: E0314 09:00:30.704033 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:30 crc kubenswrapper[4869]: E0314 09:00:30.704167 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:31 crc kubenswrapper[4869]: I0314 09:00:31.703876 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:31 crc kubenswrapper[4869]: E0314 09:00:31.704122 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:31 crc kubenswrapper[4869]: I0314 09:00:31.706062 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:00:31 crc kubenswrapper[4869]: E0314 09:00:31.706411 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bhcmd_openshift-ovn-kubernetes(489ada67-a888-460e-862c-cd59acc0c6fe)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" Mar 14 09:00:32 crc kubenswrapper[4869]: I0314 09:00:32.703144 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:32 crc kubenswrapper[4869]: I0314 09:00:32.703220 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:32 crc kubenswrapper[4869]: E0314 09:00:32.703328 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:32 crc kubenswrapper[4869]: E0314 09:00:32.703490 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:32 crc kubenswrapper[4869]: I0314 09:00:32.703741 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:32 crc kubenswrapper[4869]: E0314 09:00:32.703923 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:32 crc kubenswrapper[4869]: E0314 09:00:32.815246 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:33 crc kubenswrapper[4869]: I0314 09:00:33.702827 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:33 crc kubenswrapper[4869]: E0314 09:00:33.703072 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:34 crc kubenswrapper[4869]: I0314 09:00:34.703714 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:34 crc kubenswrapper[4869]: I0314 09:00:34.703749 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:34 crc kubenswrapper[4869]: I0314 09:00:34.703741 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:34 crc kubenswrapper[4869]: E0314 09:00:34.703867 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:34 crc kubenswrapper[4869]: E0314 09:00:34.703985 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:34 crc kubenswrapper[4869]: E0314 09:00:34.704056 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:35 crc kubenswrapper[4869]: I0314 09:00:35.702894 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:35 crc kubenswrapper[4869]: E0314 09:00:35.703283 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:36 crc kubenswrapper[4869]: I0314 09:00:36.703034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:36 crc kubenswrapper[4869]: I0314 09:00:36.703086 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:36 crc kubenswrapper[4869]: I0314 09:00:36.703108 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:36 crc kubenswrapper[4869]: E0314 09:00:36.703169 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:36 crc kubenswrapper[4869]: E0314 09:00:36.703218 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:36 crc kubenswrapper[4869]: E0314 09:00:36.703278 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:37 crc kubenswrapper[4869]: I0314 09:00:37.703123 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:37 crc kubenswrapper[4869]: E0314 09:00:37.704468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:37 crc kubenswrapper[4869]: E0314 09:00:37.816344 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:38 crc kubenswrapper[4869]: I0314 09:00:38.702893 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:38 crc kubenswrapper[4869]: I0314 09:00:38.702967 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:38 crc kubenswrapper[4869]: I0314 09:00:38.702910 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:38 crc kubenswrapper[4869]: E0314 09:00:38.703030 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:38 crc kubenswrapper[4869]: E0314 09:00:38.703277 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:38 crc kubenswrapper[4869]: E0314 09:00:38.703559 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:39 crc kubenswrapper[4869]: I0314 09:00:39.703789 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:39 crc kubenswrapper[4869]: E0314 09:00:39.704047 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:40 crc kubenswrapper[4869]: I0314 09:00:40.702819 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:40 crc kubenswrapper[4869]: I0314 09:00:40.702839 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:40 crc kubenswrapper[4869]: I0314 09:00:40.702841 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:40 crc kubenswrapper[4869]: E0314 09:00:40.703034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:40 crc kubenswrapper[4869]: E0314 09:00:40.703155 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:40 crc kubenswrapper[4869]: E0314 09:00:40.703322 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:41 crc kubenswrapper[4869]: I0314 09:00:41.703824 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:41 crc kubenswrapper[4869]: E0314 09:00:41.703969 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:42 crc kubenswrapper[4869]: I0314 09:00:42.703484 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:42 crc kubenswrapper[4869]: E0314 09:00:42.703651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:42 crc kubenswrapper[4869]: I0314 09:00:42.703832 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:42 crc kubenswrapper[4869]: E0314 09:00:42.704140 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:42 crc kubenswrapper[4869]: I0314 09:00:42.704803 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:42 crc kubenswrapper[4869]: E0314 09:00:42.705077 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:42 crc kubenswrapper[4869]: E0314 09:00:42.818231 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 14 09:00:43 crc kubenswrapper[4869]: I0314 09:00:43.539663 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/1.log" Mar 14 09:00:43 crc kubenswrapper[4869]: I0314 09:00:43.540207 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/0.log" Mar 14 09:00:43 crc kubenswrapper[4869]: I0314 09:00:43.540242 4869 generic.go:334] "Generic (PLEG): container finished" podID="3aedc0f3-51fe-492b-9337-02b2b6e38327" containerID="10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1" exitCode=1 Mar 14 09:00:43 crc kubenswrapper[4869]: I0314 09:00:43.540270 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nncq" event={"ID":"3aedc0f3-51fe-492b-9337-02b2b6e38327","Type":"ContainerDied","Data":"10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1"} Mar 14 09:00:43 crc kubenswrapper[4869]: I0314 09:00:43.540299 4869 scope.go:117] "RemoveContainer" containerID="8d944743bda8469f5aec7d4dc383fc6868c6c5eed6155d1659ef4df43e2787d1" Mar 14 09:00:43 crc kubenswrapper[4869]: I0314 09:00:43.540879 4869 scope.go:117] "RemoveContainer" containerID="10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1" Mar 14 09:00:43 crc kubenswrapper[4869]: E0314 09:00:43.541080 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9nncq_openshift-multus(3aedc0f3-51fe-492b-9337-02b2b6e38327)\"" pod="openshift-multus/multus-9nncq" podUID="3aedc0f3-51fe-492b-9337-02b2b6e38327" Mar 14 09:00:43 crc kubenswrapper[4869]: I0314 09:00:43.703262 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:43 crc kubenswrapper[4869]: E0314 09:00:43.703414 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:44 crc kubenswrapper[4869]: I0314 09:00:44.544090 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/1.log" Mar 14 09:00:44 crc kubenswrapper[4869]: I0314 09:00:44.703122 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:44 crc kubenswrapper[4869]: I0314 09:00:44.703193 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:44 crc kubenswrapper[4869]: I0314 09:00:44.703215 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:44 crc kubenswrapper[4869]: E0314 09:00:44.703301 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:44 crc kubenswrapper[4869]: E0314 09:00:44.703665 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:44 crc kubenswrapper[4869]: E0314 09:00:44.705245 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:45 crc kubenswrapper[4869]: I0314 09:00:45.703047 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:45 crc kubenswrapper[4869]: E0314 09:00:45.703200 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:45 crc kubenswrapper[4869]: I0314 09:00:45.704039 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:00:46 crc kubenswrapper[4869]: I0314 09:00:46.554610 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/3.log" Mar 14 09:00:46 crc kubenswrapper[4869]: I0314 09:00:46.559753 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerStarted","Data":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} Mar 14 09:00:46 crc kubenswrapper[4869]: I0314 09:00:46.560345 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 09:00:46 crc kubenswrapper[4869]: I0314 09:00:46.590695 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podStartSLOduration=135.590674975 podStartE2EDuration="2m15.590674975s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:00:46.589073985 +0000 UTC m=+199.561356048" watchObservedRunningTime="2026-03-14 09:00:46.590674975 +0000 UTC m=+199.562957028" Mar 14 09:00:46 crc kubenswrapper[4869]: I0314 09:00:46.703253 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:46 crc kubenswrapper[4869]: I0314 09:00:46.703286 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:46 crc kubenswrapper[4869]: I0314 09:00:46.703211 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:46 crc kubenswrapper[4869]: E0314 09:00:46.703407 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:46 crc kubenswrapper[4869]: E0314 09:00:46.703521 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:46 crc kubenswrapper[4869]: E0314 09:00:46.703618 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:46 crc kubenswrapper[4869]: I0314 09:00:46.715051 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n77vq"] Mar 14 09:00:46 crc kubenswrapper[4869]: I0314 09:00:46.715275 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:46 crc kubenswrapper[4869]: E0314 09:00:46.715420 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:47 crc kubenswrapper[4869]: E0314 09:00:47.819146 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:48 crc kubenswrapper[4869]: I0314 09:00:48.703317 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:48 crc kubenswrapper[4869]: I0314 09:00:48.703389 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:48 crc kubenswrapper[4869]: I0314 09:00:48.703423 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:48 crc kubenswrapper[4869]: E0314 09:00:48.703468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:48 crc kubenswrapper[4869]: E0314 09:00:48.703613 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:48 crc kubenswrapper[4869]: I0314 09:00:48.703767 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:48 crc kubenswrapper[4869]: E0314 09:00:48.704020 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:48 crc kubenswrapper[4869]: E0314 09:00:48.704113 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:50 crc kubenswrapper[4869]: I0314 09:00:50.703655 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:50 crc kubenswrapper[4869]: E0314 09:00:50.704600 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:50 crc kubenswrapper[4869]: I0314 09:00:50.703803 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:50 crc kubenswrapper[4869]: E0314 09:00:50.704790 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:50 crc kubenswrapper[4869]: I0314 09:00:50.704339 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:50 crc kubenswrapper[4869]: E0314 09:00:50.704868 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:50 crc kubenswrapper[4869]: I0314 09:00:50.703762 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:50 crc kubenswrapper[4869]: E0314 09:00:50.704927 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:52 crc kubenswrapper[4869]: I0314 09:00:52.703770 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:52 crc kubenswrapper[4869]: I0314 09:00:52.703769 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:00:52 crc kubenswrapper[4869]: I0314 09:00:52.703803 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:52 crc kubenswrapper[4869]: I0314 09:00:52.703849 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:00:52 crc kubenswrapper[4869]: E0314 09:00:52.704555 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 14 09:00:52 crc kubenswrapper[4869]: E0314 09:00:52.704587 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 14 09:00:52 crc kubenswrapper[4869]: E0314 09:00:52.704674 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862" Mar 14 09:00:52 crc kubenswrapper[4869]: E0314 09:00:52.704904 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 14 09:00:52 crc kubenswrapper[4869]: E0314 09:00:52.820561 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:00:54 crc kubenswrapper[4869]: I0314 09:00:54.703194 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:00:54 crc kubenswrapper[4869]: I0314 09:00:54.703374 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:00:54 crc kubenswrapper[4869]: E0314 09:00:54.703430 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 14 09:00:54 crc kubenswrapper[4869]: E0314 09:00:54.703654 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 14 09:00:54 crc kubenswrapper[4869]: I0314 09:00:54.703238 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 09:00:54 crc kubenswrapper[4869]: E0314 09:00:54.703793 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 14 09:00:54 crc kubenswrapper[4869]: I0314 09:00:54.703193 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq"
Mar 14 09:00:54 crc kubenswrapper[4869]: E0314 09:00:54.703890 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862"
Mar 14 09:00:56 crc kubenswrapper[4869]: I0314 09:00:56.703572 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 09:00:56 crc kubenswrapper[4869]: I0314 09:00:56.703572 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 09:00:56 crc kubenswrapper[4869]: I0314 09:00:56.703622 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq"
Mar 14 09:00:56 crc kubenswrapper[4869]: I0314 09:00:56.703757 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 09:00:56 crc kubenswrapper[4869]: E0314 09:00:56.704001 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 14 09:00:56 crc kubenswrapper[4869]: E0314 09:00:56.704123 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 14 09:00:56 crc kubenswrapper[4869]: E0314 09:00:56.704232 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862"
Mar 14 09:00:56 crc kubenswrapper[4869]: E0314 09:00:56.704609 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 14 09:00:57 crc kubenswrapper[4869]: I0314 09:00:57.705340 4869 scope.go:117] "RemoveContainer" containerID="10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1"
Mar 14 09:00:57 crc kubenswrapper[4869]: E0314 09:00:57.821156 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 14 09:00:58 crc kubenswrapper[4869]: I0314 09:00:58.605611 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/1.log"
Mar 14 09:00:58 crc kubenswrapper[4869]: I0314 09:00:58.605678 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nncq" event={"ID":"3aedc0f3-51fe-492b-9337-02b2b6e38327","Type":"ContainerStarted","Data":"49287961c3f78e591c0ac0e3cdfe6d5f5e67d4326b3c7307a7d036815caf7805"}
Mar 14 09:00:58 crc kubenswrapper[4869]: I0314 09:00:58.703031 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 09:00:58 crc kubenswrapper[4869]: I0314 09:00:58.703061 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq"
Mar 14 09:00:58 crc kubenswrapper[4869]: I0314 09:00:58.703088 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 09:00:58 crc kubenswrapper[4869]: I0314 09:00:58.703033 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 09:00:58 crc kubenswrapper[4869]: E0314 09:00:58.703249 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 14 09:00:58 crc kubenswrapper[4869]: E0314 09:00:58.703164 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 14 09:00:58 crc kubenswrapper[4869]: E0314 09:00:58.703414 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 14 09:00:58 crc kubenswrapper[4869]: E0314 09:00:58.703679 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862"
Mar 14 09:01:00 crc kubenswrapper[4869]: I0314 09:01:00.703727 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq"
Mar 14 09:01:00 crc kubenswrapper[4869]: I0314 09:01:00.703909 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 09:01:00 crc kubenswrapper[4869]: I0314 09:01:00.704026 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 09:01:00 crc kubenswrapper[4869]: E0314 09:01:00.704245 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 14 09:01:00 crc kubenswrapper[4869]: I0314 09:01:00.704321 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 09:01:00 crc kubenswrapper[4869]: E0314 09:01:00.704401 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 14 09:01:00 crc kubenswrapper[4869]: E0314 09:01:00.704011 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862"
Mar 14 09:01:00 crc kubenswrapper[4869]: E0314 09:01:00.704599 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 14 09:01:02 crc kubenswrapper[4869]: I0314 09:01:02.703194 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 09:01:02 crc kubenswrapper[4869]: E0314 09:01:02.704004 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 14 09:01:02 crc kubenswrapper[4869]: I0314 09:01:02.703192 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 09:01:02 crc kubenswrapper[4869]: I0314 09:01:02.703223 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq"
Mar 14 09:01:02 crc kubenswrapper[4869]: I0314 09:01:02.703192 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 09:01:02 crc kubenswrapper[4869]: E0314 09:01:02.704198 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n77vq" podUID="0b5b025a-d78e-4728-b492-19846b3ad862"
Mar 14 09:01:02 crc kubenswrapper[4869]: E0314 09:01:02.704258 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 14 09:01:02 crc kubenswrapper[4869]: E0314 09:01:02.704107 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.702851 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.702932 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.702875 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.702915 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.708103 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.708366 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.708556 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.715602 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.715639 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.716768 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.738973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.779129 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-729jx"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.782171 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.785942 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wx229"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.786789 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.797640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.797777 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.797800 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.798457 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.798953 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.799183 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.799494 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.799648 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.799707 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rn9g2"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.799080 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.800307 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.800828 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.801248 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.801386 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.801942 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.802312 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.802352 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.802357 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.803599 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.804336 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.804762 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.804810 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.804773 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.805970 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9njzd"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.807079 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9njzd"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.808785 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.808895 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.809305 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.809582 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.809687 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.809766 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.812183 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.812617 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.812678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.812878 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.812925 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.813024 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.813030 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.813229 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c694x"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.813316 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.813476 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.813594 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.813940 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c694x"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.814527 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qhd6d"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.815315 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5k48v"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.815929 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.816045 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-plgzk"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.816537 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-plgzk"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.816571 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.817787 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.818244 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.822122 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zgn62"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.822785 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zgn62"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.822949 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dzfrm"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.823140 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.823290 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.823653 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.823774 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.823866 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.823157 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.823985 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824025 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.823692 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824145 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824237 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824314 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824316 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824437 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.823717 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824715 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824747 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824789 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dzfrm"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824828 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.824932 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.826143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.827100 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.827299 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.836851 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c25vk"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.837959 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.845119 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.847006 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.851484 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.853198 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.853553 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.861952 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-729jx"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.862166 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.865488 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.865504 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.866927 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.867620 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.878951 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.879131 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.880791 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.880960 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.881471 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.881610 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.881736 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.881867 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.884453 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.884490 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.884720 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.884792 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.884888 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.885068 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.885078 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.885255 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.885405 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.885456 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.885602 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.885667 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.885773 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.886774 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9njzd"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.889274 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.890539 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.890845 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.892233 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.892464 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.892506 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.892621 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.892653 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.894447 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c694x"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.896383 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.896466 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wx229"]
Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.900949 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 14 09:01:04 crc
kubenswrapper[4869]: I0314 09:01:04.901238 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.901458 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.901575 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.901705 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.901745 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.906979 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.909997 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.911739 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.911920 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.912257 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.912350 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.912460 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.912846 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.913001 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.913629 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.914215 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.915182 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-plgzk"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916319 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1daf75f-9e24-416c-b435-8c949cf6db5e-etcd-service-ca\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8518e88a-aacc-484f-b82d-d55106c5bdcf-encryption-config\") pod \"apiserver-76f77b778f-9njzd\" (UID: 
\"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916381 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngg9x\" (UniqueName: \"kubernetes.io/projected/84b93e6e-f3a8-4b32-beae-85a29e271c68-kube-api-access-ngg9x\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916403 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8518e88a-aacc-484f-b82d-d55106c5bdcf-node-pullsecrets\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916419 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-config\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916447 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hldln\" (UniqueName: \"kubernetes.io/projected/f1daf75f-9e24-416c-b435-8c949cf6db5e-kube-api-access-hldln\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08f143d2-49b2-4bba-a5fa-a53015a6fa57-metrics-tls\") pod \"dns-operator-744455d44c-qhd6d\" (UID: \"08f143d2-49b2-4bba-a5fa-a53015a6fa57\") " pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916494 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjckf\" (UniqueName: \"kubernetes.io/projected/8b763477-ddeb-476e-9734-58edd336b9e2-kube-api-access-bjckf\") pod \"openshift-apiserver-operator-796bbdcf4f-wpddw\" (UID: \"8b763477-ddeb-476e-9734-58edd336b9e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfqnz\" (UniqueName: \"kubernetes.io/projected/4df77986-9162-4886-944e-fb40e804f2db-kube-api-access-tfqnz\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916825 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8518e88a-aacc-484f-b82d-d55106c5bdcf-audit-dir\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916847 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-client-ca\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916869 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77aea188-0ec6-41c9-9d17-26a579ed431c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jwrdf\" (UID: \"77aea188-0ec6-41c9-9d17-26a579ed431c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916889 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-858j7\" (UniqueName: \"kubernetes.io/projected/77aea188-0ec6-41c9-9d17-26a579ed431c-kube-api-access-858j7\") pod \"openshift-controller-manager-operator-756b6f6bc6-jwrdf\" (UID: \"77aea188-0ec6-41c9-9d17-26a579ed431c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.916968 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1daf75f-9e24-416c-b435-8c949cf6db5e-config\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b93e6e-f3a8-4b32-beae-85a29e271c68-serving-cert\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917033 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df77986-9162-4886-944e-fb40e804f2db-serving-cert\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917053 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-config\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917074 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49x5v\" (UniqueName: \"kubernetes.io/projected/08f143d2-49b2-4bba-a5fa-a53015a6fa57-kube-api-access-49x5v\") pod \"dns-operator-744455d44c-qhd6d\" (UID: \"08f143d2-49b2-4bba-a5fa-a53015a6fa57\") " pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df77986-9162-4886-944e-fb40e804f2db-config\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917117 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-config\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: 
\"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917157 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84wp6\" (UniqueName: \"kubernetes.io/projected/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-kube-api-access-84wp6\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df77986-9162-4886-944e-fb40e804f2db-service-ca-bundle\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2c5\" (UniqueName: \"kubernetes.io/projected/e01443e3-18ec-4ad3-821a-14332c44fe30-kube-api-access-mh2c5\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 
09:01:04.917239 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-etcd-serving-ca\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917259 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917280 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8518e88a-aacc-484f-b82d-d55106c5bdcf-serving-cert\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917314 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4576dffd-6571-46e9-bb64-3add543049a2-serving-cert\") pod \"openshift-config-operator-7777fb866f-5k48v\" (UID: \"4576dffd-6571-46e9-bb64-3add543049a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917333 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlgk2\" (UniqueName: \"kubernetes.io/projected/8518e88a-aacc-484f-b82d-d55106c5bdcf-kube-api-access-hlgk2\") pod \"apiserver-76f77b778f-9njzd\" (UID: 
\"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1683ba1-6f04-40b1-b605-1ca997a00d59-config\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917419 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4576dffd-6571-46e9-bb64-3add543049a2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5k48v\" (UID: \"4576dffd-6571-46e9-bb64-3add543049a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e01443e3-18ec-4ad3-821a-14332c44fe30-config\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8518e88a-aacc-484f-b82d-d55106c5bdcf-etcd-client\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917483 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-client-ca\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917504 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f1daf75f-9e24-416c-b435-8c949cf6db5e-etcd-client\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f1daf75f-9e24-416c-b435-8c949cf6db5e-etcd-ca\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917565 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1daf75f-9e24-416c-b435-8c949cf6db5e-serving-cert\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917584 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg99k\" (UniqueName: \"kubernetes.io/projected/c1683ba1-6f04-40b1-b605-1ca997a00d59-kube-api-access-tg99k\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 
09:01:04.917606 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df77986-9162-4886-944e-fb40e804f2db-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917624 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-image-import-ca\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917649 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e01443e3-18ec-4ad3-821a-14332c44fe30-auth-proxy-config\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917667 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b763477-ddeb-476e-9734-58edd336b9e2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wpddw\" (UID: \"8b763477-ddeb-476e-9734-58edd336b9e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917686 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1683ba1-6f04-40b1-b605-1ca997a00d59-images\") pod 
\"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917705 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-audit\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917836 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qhd6d"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917913 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9f9\" (UniqueName: \"kubernetes.io/projected/4576dffd-6571-46e9-bb64-3add543049a2-kube-api-access-sx9f9\") pod \"openshift-config-operator-7777fb866f-5k48v\" (UID: \"4576dffd-6571-46e9-bb64-3add543049a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.917980 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e01443e3-18ec-4ad3-821a-14332c44fe30-machine-approver-tls\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.918008 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b763477-ddeb-476e-9734-58edd336b9e2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wpddw\" (UID: 
\"8b763477-ddeb-476e-9734-58edd336b9e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.918024 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1683ba1-6f04-40b1-b605-1ca997a00d59-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.918039 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-serving-cert\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.918057 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77aea188-0ec6-41c9-9d17-26a579ed431c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jwrdf\" (UID: \"77aea188-0ec6-41c9-9d17-26a579ed431c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.929103 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zgn62"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.929638 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.939928 4869 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.941312 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.963751 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2pnmj"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.964341 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.965305 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.966038 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.966056 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.966873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.967530 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.968968 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rn9g2"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.972149 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.973331 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fdrdm"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.976804 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.977818 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.986422 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.987162 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.988018 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.988556 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.988966 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.989061 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.989103 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.992020 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.992132 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.992726 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.993274 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.993488 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.993608 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.994579 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.994794 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5kmqk"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.995625 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.996239 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7"] Mar 14 09:01:04 crc kubenswrapper[4869]: I0314 09:01:04.999870 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjgpv"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.000016 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.000995 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kv4dw"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.001222 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.002100 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.002571 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.003997 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.004476 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.004857 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557980-9t5kk"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.005282 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.005389 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.005558 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.005958 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.006097 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.006778 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.006956 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.007102 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwb49"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.008055 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.008467 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.009314 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c25vk"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.010283 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.011020 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.011731 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.012961 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.016188 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dzfrm"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.017446 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-csls9"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.018135 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-csls9" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.018747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4576dffd-6571-46e9-bb64-3add543049a2-serving-cert\") pod \"openshift-config-operator-7777fb866f-5k48v\" (UID: \"4576dffd-6571-46e9-bb64-3add543049a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.018837 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlgk2\" (UniqueName: \"kubernetes.io/projected/8518e88a-aacc-484f-b82d-d55106c5bdcf-kube-api-access-hlgk2\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.018959 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-serving-cert\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.019072 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-dir\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.019193 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.019307 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1683ba1-6f04-40b1-b605-1ca997a00d59-config\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.019426 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4576dffd-6571-46e9-bb64-3add543049a2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5k48v\" (UID: \"4576dffd-6571-46e9-bb64-3add543049a2\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.020600 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q274w\" (UniqueName: \"kubernetes.io/projected/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-kube-api-access-q274w\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.020712 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8518e88a-aacc-484f-b82d-d55106c5bdcf-etcd-client\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.019028 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jwbdc"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.020545 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1683ba1-6f04-40b1-b605-1ca997a00d59-config\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.021022 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-client-ca\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.022147 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b622w\" (UniqueName: \"kubernetes.io/projected/14eab3cd-227a-4e8a-8bf1-f78ee852637c-kube-api-access-b622w\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.022238 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.022259 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.022446 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e01443e3-18ec-4ad3-821a-14332c44fe30-config\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.022084 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-client-ca\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.022100 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-csls9"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 
09:01:05.022821 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f1daf75f-9e24-416c-b435-8c949cf6db5e-etcd-ca\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.022953 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f1daf75f-9e24-416c-b435-8c949cf6db5e-etcd-client\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.023714 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-oauth-serving-cert\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.023839 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.023964 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1daf75f-9e24-416c-b435-8c949cf6db5e-serving-cert\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.024051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4576dffd-6571-46e9-bb64-3add543049a2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5k48v\" (UID: \"4576dffd-6571-46e9-bb64-3add543049a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.023222 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.023489 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f1daf75f-9e24-416c-b435-8c949cf6db5e-etcd-ca\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.023030 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e01443e3-18ec-4ad3-821a-14332c44fe30-config\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.024347 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg99k\" (UniqueName: \"kubernetes.io/projected/c1683ba1-6f04-40b1-b605-1ca997a00d59-kube-api-access-tg99k\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.024443 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df77986-9162-4886-944e-fb40e804f2db-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.024588 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-image-import-ca\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.024696 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfc1e74f-3dd9-4140-855b-e73396e54883-audit-dir\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.024817 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-config\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.024908 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crblk\" (UniqueName: \"kubernetes.io/projected/33333edb-d3b9-49eb-acc4-bc014c8da396-kube-api-access-crblk\") pod \"downloads-7954f5f757-zgn62\" (UID: \"33333edb-d3b9-49eb-acc4-bc014c8da396\") " 
pod="openshift-console/downloads-7954f5f757-zgn62" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.024976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e01443e3-18ec-4ad3-821a-14332c44fe30-auth-proxy-config\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025147 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b763477-ddeb-476e-9734-58edd336b9e2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wpddw\" (UID: \"8b763477-ddeb-476e-9734-58edd336b9e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025221 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1683ba1-6f04-40b1-b605-1ca997a00d59-images\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025272 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5k48v"] Mar 14 09:01:05 crc 
kubenswrapper[4869]: I0314 09:01:05.025288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-audit\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025437 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx9f9\" (UniqueName: \"kubernetes.io/projected/4576dffd-6571-46e9-bb64-3add543049a2-kube-api-access-sx9f9\") pod \"openshift-config-operator-7777fb866f-5k48v\" (UID: \"4576dffd-6571-46e9-bb64-3add543049a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025550 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfc1e74f-3dd9-4140-855b-e73396e54883-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025632 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-service-ca\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025708 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: 
\"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025773 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e01443e3-18ec-4ad3-821a-14332c44fe30-auth-proxy-config\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025855 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnh5b\" (UniqueName: \"kubernetes.io/projected/6d3f7d57-086d-45b5-8b44-c749f1a13821-kube-api-access-qnh5b\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e01443e3-18ec-4ad3-821a-14332c44fe30-machine-approver-tls\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026000 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b763477-ddeb-476e-9734-58edd336b9e2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wpddw\" (UID: \"8b763477-ddeb-476e-9734-58edd336b9e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1683ba1-6f04-40b1-b605-1ca997a00d59-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026173 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-serving-cert\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026250 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77aea188-0ec6-41c9-9d17-26a579ed431c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jwrdf\" (UID: \"77aea188-0ec6-41c9-9d17-26a579ed431c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026321 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7422b29f-2afe-4539-9c59-320e01b530b2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2t9jx\" (UID: \"7422b29f-2afe-4539-9c59-320e01b530b2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026384 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b763477-ddeb-476e-9734-58edd336b9e2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wpddw\" (UID: \"8b763477-ddeb-476e-9734-58edd336b9e2\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025948 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026467 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c1683ba1-6f04-40b1-b605-1ca997a00d59-images\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.025657 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df77986-9162-4886-944e-fb40e804f2db-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026666 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026667 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1daf75f-9e24-416c-b435-8c949cf6db5e-etcd-service-ca\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026773 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8518e88a-aacc-484f-b82d-d55106c5bdcf-encryption-config\") pod 
\"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026868 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-trusted-ca-bundle\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026912 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-policies\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026950 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngg9x\" (UniqueName: \"kubernetes.io/projected/84b93e6e-f3a8-4b32-beae-85a29e271c68-kube-api-access-ngg9x\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026981 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8518e88a-aacc-484f-b82d-d55106c5bdcf-node-pullsecrets\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027010 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027074 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027114 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-config\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hldln\" (UniqueName: \"kubernetes.io/projected/f1daf75f-9e24-416c-b435-8c949cf6db5e-kube-api-access-hldln\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027215 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhzgn\" (UniqueName: \"kubernetes.io/projected/7422b29f-2afe-4539-9c59-320e01b530b2-kube-api-access-vhzgn\") pod \"cluster-samples-operator-665b6dd947-2t9jx\" (UID: \"7422b29f-2afe-4539-9c59-320e01b530b2\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08f143d2-49b2-4bba-a5fa-a53015a6fa57-metrics-tls\") pod \"dns-operator-744455d44c-qhd6d\" (UID: \"08f143d2-49b2-4bba-a5fa-a53015a6fa57\") " pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027276 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027309 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027335 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-trusted-ca\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027373 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bjckf\" (UniqueName: \"kubernetes.io/projected/8b763477-ddeb-476e-9734-58edd336b9e2-kube-api-access-bjckf\") pod \"openshift-apiserver-operator-796bbdcf4f-wpddw\" (UID: \"8b763477-ddeb-476e-9734-58edd336b9e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027400 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfqnz\" (UniqueName: \"kubernetes.io/projected/4df77986-9162-4886-944e-fb40e804f2db-kube-api-access-tfqnz\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027427 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8518e88a-aacc-484f-b82d-d55106c5bdcf-audit-dir\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.026664 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-audit\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.028123 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f1daf75f-9e24-416c-b435-8c949cf6db5e-etcd-client\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.028204 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4576dffd-6571-46e9-bb64-3add543049a2-serving-cert\") pod \"openshift-config-operator-7777fb866f-5k48v\" (UID: \"4576dffd-6571-46e9-bb64-3add543049a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.028249 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8518e88a-aacc-484f-b82d-d55106c5bdcf-node-pullsecrets\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.027873 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8518e88a-aacc-484f-b82d-d55106c5bdcf-etcd-client\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.028439 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8518e88a-aacc-484f-b82d-d55106c5bdcf-audit-dir\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.028475 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-config\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.028539 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77aea188-0ec6-41c9-9d17-26a579ed431c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jwrdf\" (UID: \"77aea188-0ec6-41c9-9d17-26a579ed431c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.028570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-858j7\" (UniqueName: \"kubernetes.io/projected/77aea188-0ec6-41c9-9d17-26a579ed431c-kube-api-access-858j7\") pod \"openshift-controller-manager-operator-756b6f6bc6-jwrdf\" (UID: \"77aea188-0ec6-41c9-9d17-26a579ed431c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.028721 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-config\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.028764 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557980-9t5kk"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.029484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-client-ca\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.029751 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-oauth-config\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.029813 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1daf75f-9e24-416c-b435-8c949cf6db5e-config\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.029868 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b93e6e-f3a8-4b32-beae-85a29e271c68-serving-cert\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.029902 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfc1e74f-3dd9-4140-855b-e73396e54883-audit-policies\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.029962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f1daf75f-9e24-416c-b435-8c949cf6db5e-etcd-service-ca\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030081 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77aea188-0ec6-41c9-9d17-26a579ed431c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jwrdf\" (UID: \"77aea188-0ec6-41c9-9d17-26a579ed431c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030204 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df77986-9162-4886-944e-fb40e804f2db-serving-cert\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030496 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-serving-cert\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1daf75f-9e24-416c-b435-8c949cf6db5e-config\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030698 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: 
\"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030759 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-serving-cert\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030793 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kv4dw"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030824 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-config\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49x5v\" (UniqueName: \"kubernetes.io/projected/08f143d2-49b2-4bba-a5fa-a53015a6fa57-kube-api-access-49x5v\") pod \"dns-operator-744455d44c-qhd6d\" (UID: \"08f143d2-49b2-4bba-a5fa-a53015a6fa57\") " pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030887 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfc1e74f-3dd9-4140-855b-e73396e54883-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc 
kubenswrapper[4869]: I0314 09:01:05.030907 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfc1e74f-3dd9-4140-855b-e73396e54883-encryption-config\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030939 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.030993 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df77986-9162-4886-944e-fb40e804f2db-config\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031015 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-config\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031057 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84wp6\" (UniqueName: \"kubernetes.io/projected/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-kube-api-access-84wp6\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031080 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc1e74f-3dd9-4140-855b-e73396e54883-serving-cert\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031101 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df77986-9162-4886-944e-fb40e804f2db-service-ca-bundle\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031173 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqkj5\" (UniqueName: \"kubernetes.io/projected/bfc1e74f-3dd9-4140-855b-e73396e54883-kube-api-access-xqkj5\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2c5\" (UniqueName: \"kubernetes.io/projected/e01443e3-18ec-4ad3-821a-14332c44fe30-kube-api-access-mh2c5\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031240 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-etcd-serving-ca\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031263 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031294 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031328 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8518e88a-aacc-484f-b82d-d55106c5bdcf-serving-cert\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfc1e74f-3dd9-4140-855b-e73396e54883-etcd-client\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031381 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031410 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4l4g\" (UniqueName: \"kubernetes.io/projected/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-kube-api-access-n4l4g\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.031418 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/e01443e3-18ec-4ad3-821a-14332c44fe30-machine-approver-tls\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.032094 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-image-import-ca\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.032426 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df77986-9162-4886-944e-fb40e804f2db-service-ca-bundle\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.032780 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df77986-9162-4886-944e-fb40e804f2db-config\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.032807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-etcd-serving-ca\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.033291 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.033386 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77aea188-0ec6-41c9-9d17-26a579ed431c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jwrdf\" (UID: \"77aea188-0ec6-41c9-9d17-26a579ed431c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.033385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b763477-ddeb-476e-9734-58edd336b9e2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wpddw\" (UID: \"8b763477-ddeb-476e-9734-58edd336b9e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.033426 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08f143d2-49b2-4bba-a5fa-a53015a6fa57-metrics-tls\") pod \"dns-operator-744455d44c-qhd6d\" (UID: \"08f143d2-49b2-4bba-a5fa-a53015a6fa57\") " pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.033652 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df77986-9162-4886-944e-fb40e804f2db-serving-cert\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 
09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.034392 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.035359 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b93e6e-f3a8-4b32-beae-85a29e271c68-serving-cert\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.035568 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.039251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8518e88a-aacc-484f-b82d-d55106c5bdcf-encryption-config\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.039306 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8518e88a-aacc-484f-b82d-d55106c5bdcf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.039928 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-client-ca\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.040385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8518e88a-aacc-484f-b82d-d55106c5bdcf-serving-cert\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.045039 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.045124 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.045141 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.046959 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-config\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.049734 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.050137 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.054149 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-5kmqk"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.054200 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.058222 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-config\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.058241 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1683ba1-6f04-40b1-b605-1ca997a00d59-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.059452 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1daf75f-9e24-416c-b435-8c949cf6db5e-serving-cert\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.062014 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.066331 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.069105 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.070563 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.071803 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fdrdm"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.073008 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jwbdc"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.074167 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjgpv"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.075434 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.076774 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.078077 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwb49"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.079579 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kvsmq"] Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.080667 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.086081 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.105583 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.125358 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.133414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-srv-cert\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.133460 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cfa3df2-d8c2-4ce8-88ef-31963b5e027f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7q5vd\" (UID: \"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.133488 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5kn9\" (UniqueName: \"kubernetes.io/projected/5ad6520f-5f43-465e-877b-94854b4ba96a-kube-api-access-s5kn9\") pod \"migrator-59844c95c7-g2q87\" (UID: \"5ad6520f-5f43-465e-877b-94854b4ba96a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87" Mar 14 09:01:05 crc 
kubenswrapper[4869]: I0314 09:01:05.133532 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69ff3e78-eb90-4e0f-a99b-f80cc1c52de9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n4xl6\" (UID: \"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.133554 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/306303aa-346b-43a9-9797-f83308ea2b31-profile-collector-cert\") pod \"catalog-operator-68c6474976-wd9hv\" (UID: \"306303aa-346b-43a9-9797-f83308ea2b31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.133833 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-socket-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.133915 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-serving-cert\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.133947 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.133977 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e245ff0-c737-4c36-aaad-f79c24030113-config-volume\") pod \"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.133994 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d051a1c-0150-43fd-b2dd-45ba5f654021-service-ca-bundle\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134016 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxjk\" (UniqueName: \"kubernetes.io/projected/db3ce98b-d0f8-4fda-84cb-390a11eb508e-kube-api-access-xvxjk\") pod \"auto-csr-approver-29557980-9t5kk\" (UID: \"db3ce98b-d0f8-4fda-84cb-390a11eb508e\") " pod="openshift-infra/auto-csr-approver-29557980-9t5kk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134037 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b622w\" (UniqueName: \"kubernetes.io/projected/14eab3cd-227a-4e8a-8bf1-f78ee852637c-kube-api-access-b622w\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 
09:01:05.134056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/09d377fd-9022-4280-b48a-10a75f18cb67-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ldfhw\" (UID: \"09d377fd-9022-4280-b48a-10a75f18cb67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134102 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134148 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-config\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134177 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crblk\" (UniqueName: 
\"kubernetes.io/projected/33333edb-d3b9-49eb-acc4-bc014c8da396-kube-api-access-crblk\") pod \"downloads-7954f5f757-zgn62\" (UID: \"33333edb-d3b9-49eb-acc4-bc014c8da396\") " pod="openshift-console/downloads-7954f5f757-zgn62" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134196 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-apiservice-cert\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134228 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-metrics-tls\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4446de34-e54f-4549-babc-9615eecc511a-signing-cabundle\") pod \"service-ca-9c57cc56f-5kmqk\" (UID: \"4446de34-e54f-4549-babc-9615eecc511a\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134279 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-trusted-ca\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134461 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfc1e74f-3dd9-4140-855b-e73396e54883-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-service-ca\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134664 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.134695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnh5b\" (UniqueName: \"kubernetes.io/projected/6d3f7d57-086d-45b5-8b44-c749f1a13821-kube-api-access-qnh5b\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135595 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-registration-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 
14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7d051a1c-0150-43fd-b2dd-45ba5f654021-stats-auth\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135681 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-service-ca\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135715 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7422b29f-2afe-4539-9c59-320e01b530b2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2t9jx\" (UID: \"7422b29f-2afe-4539-9c59-320e01b530b2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135046 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-config\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " 
pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135748 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fgj6\" (UniqueName: \"kubernetes.io/projected/f844658f-e0d6-4d40-b67a-29c94cf226b0-kube-api-access-6fgj6\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135907 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-webhook-cert\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135948 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqxs\" (UniqueName: \"kubernetes.io/projected/306303aa-346b-43a9-9797-f83308ea2b31-kube-api-access-6jqxs\") pod \"catalog-operator-68c6474976-wd9hv\" (UID: \"306303aa-346b-43a9-9797-f83308ea2b31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.135980 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-trusted-ca-bundle\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136006 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7e245ff0-c737-4c36-aaad-f79c24030113-secret-volume\") pod \"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136049 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prntp\" (UniqueName: \"kubernetes.io/projected/09d377fd-9022-4280-b48a-10a75f18cb67-kube-api-access-prntp\") pod \"package-server-manager-789f6589d5-ldfhw\" (UID: \"09d377fd-9022-4280-b48a-10a75f18cb67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0bef4caa-4178-40f3-8486-a824302db6ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kv4dw\" (UID: \"0bef4caa-4178-40f3-8486-a824302db6ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136161 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cfa3df2-d8c2-4ce8-88ef-31963b5e027f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7q5vd\" (UID: \"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136195 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhs9v\" (UniqueName: \"kubernetes.io/projected/7d0b3ce9-3a56-4562-9534-dc512f82474d-kube-api-access-nhs9v\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136226 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-trusted-ca\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136246 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cfa3df2-d8c2-4ce8-88ef-31963b5e027f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7q5vd\" (UID: \"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-config\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136313 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e3685946-eb77-4e98-bba5-d642c8697037-cert\") pod \"ingress-canary-csls9\" (UID: \"e3685946-eb77-4e98-bba5-d642c8697037\") " pod="openshift-ingress-canary/ingress-canary-csls9" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-oauth-config\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136374 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpswb\" (UniqueName: \"kubernetes.io/projected/b6e212ea-bda4-4257-b21c-6eadd30f6732-kube-api-access-qpswb\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136403 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7d051a1c-0150-43fd-b2dd-45ba5f654021-default-certificate\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136439 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfc1e74f-3dd9-4140-855b-e73396e54883-audit-policies\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136472 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136493 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-serving-cert\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136548 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfc1e74f-3dd9-4140-855b-e73396e54883-encryption-config\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c1cd50-7623-4fd4-aea2-012d1ff4a3a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-brsgj\" (UID: \"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136596 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69ff3e78-eb90-4e0f-a99b-f80cc1c52de9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n4xl6\" (UID: 
\"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136622 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnbwq\" (UniqueName: \"kubernetes.io/projected/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-kube-api-access-xnbwq\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136648 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b6e212ea-bda4-4257-b21c-6eadd30f6732-images\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136687 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc1e74f-3dd9-4140-855b-e73396e54883-serving-cert\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfc1e74f-3dd9-4140-855b-e73396e54883-etcd-client\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ff3e78-eb90-4e0f-a99b-f80cc1c52de9-config\") pod \"kube-controller-manager-operator-78b949d7b-n4xl6\" (UID: \"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136836 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkbks\" (UniqueName: \"kubernetes.io/projected/bf0df065-c182-44e0-84d1-f0e491baf3f5-kube-api-access-gkbks\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136864 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-dir\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136891 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q274w\" (UniqueName: \"kubernetes.io/projected/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-kube-api-access-q274w\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: 
\"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136918 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-oauth-serving-cert\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136944 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clzbr\" (UniqueName: \"kubernetes.io/projected/e3685946-eb77-4e98-bba5-d642c8697037-kube-api-access-clzbr\") pod \"ingress-canary-csls9\" (UID: \"e3685946-eb77-4e98-bba5-d642c8697037\") " pod="openshift-ingress-canary/ingress-canary-csls9" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.136990 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4446de34-e54f-4549-babc-9615eecc511a-signing-key\") pod \"service-ca-9c57cc56f-5kmqk\" (UID: \"4446de34-e54f-4549-babc-9615eecc511a\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137013 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137043 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2hwq\" (UniqueName: \"kubernetes.io/projected/4446de34-e54f-4549-babc-9615eecc511a-kube-api-access-p2hwq\") pod \"service-ca-9c57cc56f-5kmqk\" (UID: \"4446de34-e54f-4549-babc-9615eecc511a\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f844658f-e0d6-4d40-b67a-29c94cf226b0-config\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfc1e74f-3dd9-4140-855b-e73396e54883-audit-dir\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137114 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137146 
4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7fe5a99-824d-49bc-aed1-c14fef7eddc8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pp9bj\" (UID: \"b7fe5a99-824d-49bc-aed1-c14fef7eddc8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137172 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0df065-c182-44e0-84d1-f0e491baf3f5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137207 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-policies\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137232 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ks2f\" (UniqueName: \"kubernetes.io/projected/0bef4caa-4178-40f3-8486-a824302db6ca-kube-api-access-4ks2f\") pod \"multus-admission-controller-857f4d67dd-kv4dw\" (UID: \"0bef4caa-4178-40f3-8486-a824302db6ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137256 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp5fg\" (UniqueName: 
\"kubernetes.io/projected/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-kube-api-access-dp5fg\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137281 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137310 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kz767\" (UID: \"d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137340 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6e212ea-bda4-4257-b21c-6eadd30f6732-proxy-tls\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137364 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-plugins-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " 
pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137387 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-csi-data-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f844658f-e0d6-4d40-b67a-29c94cf226b0-serving-cert\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137435 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d051a1c-0150-43fd-b2dd-45ba5f654021-metrics-certs\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhzgn\" (UniqueName: \"kubernetes.io/projected/7422b29f-2afe-4539-9c59-320e01b530b2-kube-api-access-vhzgn\") pod \"cluster-samples-operator-665b6dd947-2t9jx\" (UID: \"7422b29f-2afe-4539-9c59-320e01b530b2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b6e212ea-bda4-4257-b21c-6eadd30f6732-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137601 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-trusted-ca\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137670 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-trusted-ca-bundle\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.137342 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.138324 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-oauth-serving-cert\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.138342 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-config\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.138393 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-policies\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.138594 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-serving-cert\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139105 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/306303aa-346b-43a9-9797-f83308ea2b31-srv-cert\") pod \"catalog-operator-68c6474976-wd9hv\" (UID: \"306303aa-346b-43a9-9797-f83308ea2b31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139150 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sx6t\" (UniqueName: \"kubernetes.io/projected/7d051a1c-0150-43fd-b2dd-45ba5f654021-kube-api-access-7sx6t\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139193 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139217 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139247 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139271 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bf0df065-c182-44e0-84d1-f0e491baf3f5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140303 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-mountpoint-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140346 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvrzr\" (UniqueName: \"kubernetes.io/projected/2cd7688a-9024-48c6-9094-3df0aaa49aa7-kube-api-access-bvrzr\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140372 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b7fe5a99-824d-49bc-aed1-c14fef7eddc8-proxy-tls\") pod \"machine-config-controller-84d6567774-pp9bj\" (UID: \"b7fe5a99-824d-49bc-aed1-c14fef7eddc8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfc1e74f-3dd9-4140-855b-e73396e54883-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 
09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140449 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-tmpfs\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140502 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdbx\" (UniqueName: \"kubernetes.io/projected/7e245ff0-c737-4c36-aaad-f79c24030113-kube-api-access-vsdbx\") pod \"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140620 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdgsw\" (UniqueName: 
\"kubernetes.io/projected/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-kube-api-access-cdgsw\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140697 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4q7z\" (UniqueName: \"kubernetes.io/projected/b7fe5a99-824d-49bc-aed1-c14fef7eddc8-kube-api-access-j4q7z\") pod \"machine-config-controller-84d6567774-pp9bj\" (UID: \"b7fe5a99-824d-49bc-aed1-c14fef7eddc8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140737 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c1cd50-7623-4fd4-aea2-012d1ff4a3a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-brsgj\" (UID: \"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140768 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqkj5\" (UniqueName: \"kubernetes.io/projected/bfc1e74f-3dd9-4140-855b-e73396e54883-kube-api-access-xqkj5\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140796 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdflb\" (UniqueName: \"kubernetes.io/projected/d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09-kube-api-access-qdflb\") pod \"control-plane-machine-set-operator-78cbb6b69f-kz767\" (UID: \"d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140827 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c1cd50-7623-4fd4-aea2-012d1ff4a3a4-config\") pod \"kube-apiserver-operator-766d6c64bb-brsgj\" (UID: \"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140891 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.140920 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4l4g\" (UniqueName: \"kubernetes.io/projected/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-kube-api-access-n4l4g\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.141208 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-oauth-config\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.141737 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7422b29f-2afe-4539-9c59-320e01b530b2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2t9jx\" (UID: \"7422b29f-2afe-4539-9c59-320e01b530b2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139722 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfc1e74f-3dd9-4140-855b-e73396e54883-audit-dir\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139680 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139783 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-dir\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.143704 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfc1e74f-3dd9-4140-855b-e73396e54883-encryption-config\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.143704 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc1e74f-3dd9-4140-855b-e73396e54883-serving-cert\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.139817 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.143777 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfc1e74f-3dd9-4140-855b-e73396e54883-etcd-client\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.144128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfc1e74f-3dd9-4140-855b-e73396e54883-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.144158 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-serving-cert\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.144633 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.145615 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.145725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.145807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfc1e74f-3dd9-4140-855b-e73396e54883-audit-policies\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.145837 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.146340 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.146420 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.146776 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.155295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfc1e74f-3dd9-4140-855b-e73396e54883-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 
09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.171327 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.172139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.185748 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.196150 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.225981 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.241884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clzbr\" (UniqueName: \"kubernetes.io/projected/e3685946-eb77-4e98-bba5-d642c8697037-kube-api-access-clzbr\") pod \"ingress-canary-csls9\" (UID: \"e3685946-eb77-4e98-bba5-d642c8697037\") " pod="openshift-ingress-canary/ingress-canary-csls9" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.241929 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4446de34-e54f-4549-babc-9615eecc511a-signing-key\") pod \"service-ca-9c57cc56f-5kmqk\" (UID: \"4446de34-e54f-4549-babc-9615eecc511a\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.241952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2hwq\" (UniqueName: \"kubernetes.io/projected/4446de34-e54f-4549-babc-9615eecc511a-kube-api-access-p2hwq\") pod \"service-ca-9c57cc56f-5kmqk\" (UID: \"4446de34-e54f-4549-babc-9615eecc511a\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.241972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f844658f-e0d6-4d40-b67a-29c94cf226b0-config\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.241990 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242018 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7fe5a99-824d-49bc-aed1-c14fef7eddc8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pp9bj\" (UID: \"b7fe5a99-824d-49bc-aed1-c14fef7eddc8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" 
Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0df065-c182-44e0-84d1-f0e491baf3f5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242060 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ks2f\" (UniqueName: \"kubernetes.io/projected/0bef4caa-4178-40f3-8486-a824302db6ca-kube-api-access-4ks2f\") pod \"multus-admission-controller-857f4d67dd-kv4dw\" (UID: \"0bef4caa-4178-40f3-8486-a824302db6ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp5fg\" (UniqueName: \"kubernetes.io/projected/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-kube-api-access-dp5fg\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242102 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kz767\" (UID: \"d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/b6e212ea-bda4-4257-b21c-6eadd30f6732-proxy-tls\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d051a1c-0150-43fd-b2dd-45ba5f654021-metrics-certs\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242168 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-plugins-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242188 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-csi-data-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242206 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f844658f-e0d6-4d40-b67a-29c94cf226b0-serving-cert\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6e212ea-bda4-4257-b21c-6eadd30f6732-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/306303aa-346b-43a9-9797-f83308ea2b31-srv-cert\") pod \"catalog-operator-68c6474976-wd9hv\" (UID: \"306303aa-346b-43a9-9797-f83308ea2b31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242267 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sx6t\" (UniqueName: \"kubernetes.io/projected/7d051a1c-0150-43fd-b2dd-45ba5f654021-kube-api-access-7sx6t\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf0df065-c182-44e0-84d1-f0e491baf3f5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242322 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242343 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-mountpoint-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242360 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvrzr\" (UniqueName: \"kubernetes.io/projected/2cd7688a-9024-48c6-9094-3df0aaa49aa7-kube-api-access-bvrzr\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242377 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b7fe5a99-824d-49bc-aed1-c14fef7eddc8-proxy-tls\") pod \"machine-config-controller-84d6567774-pp9bj\" (UID: \"b7fe5a99-824d-49bc-aed1-c14fef7eddc8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242396 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-tmpfs\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242419 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdbx\" (UniqueName: \"kubernetes.io/projected/7e245ff0-c737-4c36-aaad-f79c24030113-kube-api-access-vsdbx\") pod 
\"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242442 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdgsw\" (UniqueName: \"kubernetes.io/projected/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-kube-api-access-cdgsw\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242462 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4q7z\" (UniqueName: \"kubernetes.io/projected/b7fe5a99-824d-49bc-aed1-c14fef7eddc8-kube-api-access-j4q7z\") pod \"machine-config-controller-84d6567774-pp9bj\" (UID: \"b7fe5a99-824d-49bc-aed1-c14fef7eddc8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242492 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c1cd50-7623-4fd4-aea2-012d1ff4a3a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-brsgj\" (UID: \"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242536 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdflb\" (UniqueName: \"kubernetes.io/projected/d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09-kube-api-access-qdflb\") pod \"control-plane-machine-set-operator-78cbb6b69f-kz767\" (UID: \"d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 
09:01:05.242554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c1cd50-7623-4fd4-aea2-012d1ff4a3a4-config\") pod \"kube-apiserver-operator-766d6c64bb-brsgj\" (UID: \"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242582 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-srv-cert\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242601 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cfa3df2-d8c2-4ce8-88ef-31963b5e027f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7q5vd\" (UID: \"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242617 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5kn9\" (UniqueName: \"kubernetes.io/projected/5ad6520f-5f43-465e-877b-94854b4ba96a-kube-api-access-s5kn9\") pod \"migrator-59844c95c7-g2q87\" (UID: \"5ad6520f-5f43-465e-877b-94854b4ba96a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69ff3e78-eb90-4e0f-a99b-f80cc1c52de9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n4xl6\" (UID: 
\"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242652 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/306303aa-346b-43a9-9797-f83308ea2b31-profile-collector-cert\") pod \"catalog-operator-68c6474976-wd9hv\" (UID: \"306303aa-346b-43a9-9797-f83308ea2b31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242671 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-socket-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e245ff0-c737-4c36-aaad-f79c24030113-config-volume\") pod \"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242723 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d051a1c-0150-43fd-b2dd-45ba5f654021-service-ca-bundle\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvxjk\" (UniqueName: 
\"kubernetes.io/projected/db3ce98b-d0f8-4fda-84cb-390a11eb508e-kube-api-access-xvxjk\") pod \"auto-csr-approver-29557980-9t5kk\" (UID: \"db3ce98b-d0f8-4fda-84cb-390a11eb508e\") " pod="openshift-infra/auto-csr-approver-29557980-9t5kk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/09d377fd-9022-4280-b48a-10a75f18cb67-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ldfhw\" (UID: \"09d377fd-9022-4280-b48a-10a75f18cb67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242764 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-plugins-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242792 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242813 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-mountpoint-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242882 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-apiservice-cert\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242919 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4446de34-e54f-4549-babc-9615eecc511a-signing-cabundle\") pod \"service-ca-9c57cc56f-5kmqk\" (UID: \"4446de34-e54f-4549-babc-9615eecc511a\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-metrics-tls\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-registration-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242990 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-trusted-ca\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc 
kubenswrapper[4869]: I0314 09:01:05.243025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7d051a1c-0150-43fd-b2dd-45ba5f654021-stats-auth\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243033 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7fe5a99-824d-49bc-aed1-c14fef7eddc8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pp9bj\" (UID: \"b7fe5a99-824d-49bc-aed1-c14fef7eddc8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fgj6\" (UniqueName: \"kubernetes.io/projected/f844658f-e0d6-4d40-b67a-29c94cf226b0-kube-api-access-6fgj6\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqxs\" (UniqueName: \"kubernetes.io/projected/306303aa-346b-43a9-9797-f83308ea2b31-kube-api-access-6jqxs\") pod \"catalog-operator-68c6474976-wd9hv\" (UID: \"306303aa-346b-43a9-9797-f83308ea2b31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243095 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-tmpfs\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: 
\"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243120 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-webhook-cert\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243178 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e245ff0-c737-4c36-aaad-f79c24030113-secret-volume\") pod \"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-registration-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.242485 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-csi-data-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6e212ea-bda4-4257-b21c-6eadd30f6732-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2cd7688a-9024-48c6-9094-3df0aaa49aa7-socket-dir\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243212 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prntp\" (UniqueName: \"kubernetes.io/projected/09d377fd-9022-4280-b48a-10a75f18cb67-kube-api-access-prntp\") pod \"package-server-manager-789f6589d5-ldfhw\" (UID: \"09d377fd-9022-4280-b48a-10a75f18cb67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243371 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0bef4caa-4178-40f3-8486-a824302db6ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kv4dw\" (UID: \"0bef4caa-4178-40f3-8486-a824302db6ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cfa3df2-d8c2-4ce8-88ef-31963b5e027f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7q5vd\" (UID: \"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243426 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nhs9v\" (UniqueName: \"kubernetes.io/projected/7d0b3ce9-3a56-4562-9534-dc512f82474d-kube-api-access-nhs9v\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243450 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cfa3df2-d8c2-4ce8-88ef-31963b5e027f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7q5vd\" (UID: \"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243476 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3685946-eb77-4e98-bba5-d642c8697037-cert\") pod \"ingress-canary-csls9\" (UID: \"e3685946-eb77-4e98-bba5-d642c8697037\") " pod="openshift-ingress-canary/ingress-canary-csls9" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpswb\" (UniqueName: \"kubernetes.io/projected/b6e212ea-bda4-4257-b21c-6eadd30f6732-kube-api-access-qpswb\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7d051a1c-0150-43fd-b2dd-45ba5f654021-default-certificate\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " 
pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c1cd50-7623-4fd4-aea2-012d1ff4a3a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-brsgj\" (UID: \"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243720 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69ff3e78-eb90-4e0f-a99b-f80cc1c52de9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n4xl6\" (UID: \"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243754 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnbwq\" (UniqueName: \"kubernetes.io/projected/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-kube-api-access-xnbwq\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243815 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b6e212ea-bda4-4257-b21c-6eadd30f6732-images\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243851 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243908 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ff3e78-eb90-4e0f-a99b-f80cc1c52de9-config\") pod \"kube-controller-manager-operator-78b949d7b-n4xl6\" (UID: \"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.243948 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkbks\" (UniqueName: \"kubernetes.io/projected/bf0df065-c182-44e0-84d1-f0e491baf3f5-kube-api-access-gkbks\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.245825 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.256687 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-metrics-tls\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.271497 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.274317 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-trusted-ca\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.285740 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.306736 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.325088 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.350954 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.358115 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7d051a1c-0150-43fd-b2dd-45ba5f654021-default-certificate\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.365193 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.377537 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7d051a1c-0150-43fd-b2dd-45ba5f654021-stats-auth\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " 
pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.386141 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.395269 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d051a1c-0150-43fd-b2dd-45ba5f654021-metrics-certs\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.406496 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.426625 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.435046 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d051a1c-0150-43fd-b2dd-45ba5f654021-service-ca-bundle\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.446112 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.466163 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.485913 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 
09:01:05.506709 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.515756 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12c1cd50-7623-4fd4-aea2-012d1ff4a3a4-config\") pod \"kube-apiserver-operator-766d6c64bb-brsgj\" (UID: \"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.525883 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.546181 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.558742 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69ff3e78-eb90-4e0f-a99b-f80cc1c52de9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n4xl6\" (UID: \"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.567021 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.575323 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ff3e78-eb90-4e0f-a99b-f80cc1c52de9-config\") pod \"kube-controller-manager-operator-78b949d7b-n4xl6\" (UID: 
\"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.585567 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.606721 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.617272 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c1cd50-7623-4fd4-aea2-012d1ff4a3a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-brsgj\" (UID: \"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.626213 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.647012 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.666418 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.685123 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.706702 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.718246 
4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cfa3df2-d8c2-4ce8-88ef-31963b5e027f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7q5vd\" (UID: \"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.726377 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.735156 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cfa3df2-d8c2-4ce8-88ef-31963b5e027f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7q5vd\" (UID: \"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.747002 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.765939 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.775465 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b6e212ea-bda4-4257-b21c-6eadd30f6732-images\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.786301 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.807449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.817224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kz767\" (UID: \"d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.825748 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.838070 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6e212ea-bda4-4257-b21c-6eadd30f6732-proxy-tls\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.846943 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.865590 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.885678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 
14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.905129 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.916673 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/306303aa-346b-43a9-9797-f83308ea2b31-srv-cert\") pod \"catalog-operator-68c6474976-wd9hv\" (UID: \"306303aa-346b-43a9-9797-f83308ea2b31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.926154 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.945894 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.966418 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.977574 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/306303aa-346b-43a9-9797-f83308ea2b31-profile-collector-cert\") pod \"catalog-operator-68c6474976-wd9hv\" (UID: \"306303aa-346b-43a9-9797-f83308ea2b31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.978133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e245ff0-c737-4c36-aaad-f79c24030113-secret-volume\") pod \"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.980750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:05 crc kubenswrapper[4869]: I0314 09:01:05.986272 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.003668 4869 request.go:700] Waited for 1.009865758s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.006669 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.026029 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.037548 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b7fe5a99-824d-49bc-aed1-c14fef7eddc8-proxy-tls\") pod \"machine-config-controller-84d6567774-pp9bj\" (UID: \"b7fe5a99-824d-49bc-aed1-c14fef7eddc8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.046364 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.065796 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.074241 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4446de34-e54f-4549-babc-9615eecc511a-signing-cabundle\") pod \"service-ca-9c57cc56f-5kmqk\" (UID: \"4446de34-e54f-4549-babc-9615eecc511a\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.086661 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.105856 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.126744 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.145999 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.158390 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4446de34-e54f-4549-babc-9615eecc511a-signing-key\") pod \"service-ca-9c57cc56f-5kmqk\" (UID: \"4446de34-e54f-4549-babc-9615eecc511a\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.165981 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 14 09:01:06 crc 
kubenswrapper[4869]: I0314 09:01:06.185727 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.193997 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0df065-c182-44e0-84d1-f0e491baf3f5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.206062 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.226689 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243178 4869 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243325 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f844658f-e0d6-4d40-b67a-29c94cf226b0-serving-cert podName:f844658f-e0d6-4d40-b67a-29c94cf226b0 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.743292929 +0000 UTC m=+219.715574992 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f844658f-e0d6-4d40-b67a-29c94cf226b0-serving-cert") pod "service-ca-operator-777779d784-bgzvz" (UID: "f844658f-e0d6-4d40-b67a-29c94cf226b0") : failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243321 4869 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243463 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-trusted-ca podName:7d0b3ce9-3a56-4562-9534-dc512f82474d nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.743423373 +0000 UTC m=+219.715705466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-trusted-ca") pod "marketplace-operator-79b997595-fjgpv" (UID: "7d0b3ce9-3a56-4562-9534-dc512f82474d") : failed to sync configmap cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243750 4869 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243857 4869 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243870 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3685946-eb77-4e98-bba5-d642c8697037-cert podName:e3685946-eb77-4e98-bba5-d642c8697037 nodeName:}" failed. 
No retries permitted until 2026-03-14 09:01:06.743838523 +0000 UTC m=+219.716120736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3685946-eb77-4e98-bba5-d642c8697037-cert") pod "ingress-canary-csls9" (UID: "e3685946-eb77-4e98-bba5-d642c8697037") : failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243905 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f844658f-e0d6-4d40-b67a-29c94cf226b0-config podName:f844658f-e0d6-4d40-b67a-29c94cf226b0 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.743893804 +0000 UTC m=+219.716175867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f844658f-e0d6-4d40-b67a-29c94cf226b0-config") pod "service-ca-operator-777779d784-bgzvz" (UID: "f844658f-e0d6-4d40-b67a-29c94cf226b0") : failed to sync configmap cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243935 4869 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243992 4869 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244025 4869 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.243997 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-operator-metrics podName:7d0b3ce9-3a56-4562-9534-dc512f82474d 
nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.743977687 +0000 UTC m=+219.716259950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-operator-metrics") pod "marketplace-operator-79b997595-fjgpv" (UID: "7d0b3ce9-3a56-4562-9534-dc512f82474d") : failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244063 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09d377fd-9022-4280-b48a-10a75f18cb67-package-server-manager-serving-cert podName:09d377fd-9022-4280-b48a-10a75f18cb67 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.744053899 +0000 UTC m=+219.716335962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/09d377fd-9022-4280-b48a-10a75f18cb67-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-ldfhw" (UID: "09d377fd-9022-4280-b48a-10a75f18cb67") : failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244088 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bef4caa-4178-40f3-8486-a824302db6ca-webhook-certs podName:0bef4caa-4178-40f3-8486-a824302db6ca nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.7440782 +0000 UTC m=+219.716360273 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0bef4caa-4178-40f3-8486-a824302db6ca-webhook-certs") pod "multus-admission-controller-857f4d67dd-kv4dw" (UID: "0bef4caa-4178-40f3-8486-a824302db6ca") : failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244123 4869 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244136 4869 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244190 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e245ff0-c737-4c36-aaad-f79c24030113-config-volume podName:7e245ff0-c737-4c36-aaad-f79c24030113 nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.744168792 +0000 UTC m=+219.716451045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e245ff0-c737-4c36-aaad-f79c24030113-config-volume") pod "collect-profiles-29557980-h585m" (UID: "7e245ff0-c737-4c36-aaad-f79c24030113") : failed to sync configmap cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244200 4869 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244232 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf0df065-c182-44e0-84d1-f0e491baf3f5-serving-cert podName:bf0df065-c182-44e0-84d1-f0e491baf3f5 nodeName:}" failed. 
No retries permitted until 2026-03-14 09:01:06.744213753 +0000 UTC m=+219.716496026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bf0df065-c182-44e0-84d1-f0e491baf3f5-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-bzlc7" (UID: "bf0df065-c182-44e0-84d1-f0e491baf3f5") : failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244265 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-apiservice-cert podName:51d0aa6a-7ea1-42c6-b81c-7cedeb75514c nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.744247914 +0000 UTC m=+219.716530167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-apiservice-cert") pod "packageserver-d55dfcdfc-nbs46" (UID: "51d0aa6a-7ea1-42c6-b81c-7cedeb75514c") : failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244207 4869 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244325 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-webhook-cert podName:51d0aa6a-7ea1-42c6-b81c-7cedeb75514c nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.744309795 +0000 UTC m=+219.716592038 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-webhook-cert") pod "packageserver-d55dfcdfc-nbs46" (UID: "51d0aa6a-7ea1-42c6-b81c-7cedeb75514c") : failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244441 4869 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: E0314 09:01:06.244492 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-srv-cert podName:710f7c79-5b5b-496d-bd68-0b2c6ceebddf nodeName:}" failed. No retries permitted until 2026-03-14 09:01:06.744475929 +0000 UTC m=+219.716757982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-srv-cert") pod "olm-operator-6b444d44fb-7slw5" (UID: "710f7c79-5b5b-496d-bd68-0b2c6ceebddf") : failed to sync secret cache: timed out waiting for the condition Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.246015 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.265580 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.286175 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.306138 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.334227 
4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.345270 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.365045 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.385691 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.405639 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.425825 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.445953 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.466793 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.485227 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.506091 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.526946 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:01:06 
crc kubenswrapper[4869]: I0314 09:01:06.545579 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.567424 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.587139 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.605653 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.626204 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.646554 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.666997 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.687111 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.706912 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.726909 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.746291 4869 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.765147 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.779605 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f844658f-e0d6-4d40-b67a-29c94cf226b0-config\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.780022 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.780157 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f844658f-e0d6-4d40-b67a-29c94cf226b0-serving-cert\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.780301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf0df065-c182-44e0-84d1-f0e491baf3f5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 
09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.780459 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-srv-cert\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.780590 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e245ff0-c737-4c36-aaad-f79c24030113-config-volume\") pod \"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.780687 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.780770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/09d377fd-9022-4280-b48a-10a75f18cb67-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ldfhw\" (UID: \"09d377fd-9022-4280-b48a-10a75f18cb67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.780917 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.781047 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-webhook-cert\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.781164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0bef4caa-4178-40f3-8486-a824302db6ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kv4dw\" (UID: \"0bef4caa-4178-40f3-8486-a824302db6ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.781250 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3685946-eb77-4e98-bba5-d642c8697037-cert\") pod \"ingress-canary-csls9\" (UID: \"e3685946-eb77-4e98-bba5-d642c8697037\") " pod="openshift-ingress-canary/ingress-canary-csls9" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.781302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f844658f-e0d6-4d40-b67a-29c94cf226b0-config\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.781951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7e245ff0-c737-4c36-aaad-f79c24030113-config-volume\") pod \"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.782099 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.785600 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-srv-cert\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.786002 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3685946-eb77-4e98-bba5-d642c8697037-cert\") pod \"ingress-canary-csls9\" (UID: \"e3685946-eb77-4e98-bba5-d642c8697037\") " pod="openshift-ingress-canary/ingress-canary-csls9" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.786249 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.786316 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-webhook-cert\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.786462 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/09d377fd-9022-4280-b48a-10a75f18cb67-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ldfhw\" (UID: \"09d377fd-9022-4280-b48a-10a75f18cb67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.786595 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf0df065-c182-44e0-84d1-f0e491baf3f5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.786783 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f844658f-e0d6-4d40-b67a-29c94cf226b0-serving-cert\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.787622 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-apiservice-cert\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.788195 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0bef4caa-4178-40f3-8486-a824302db6ca-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kv4dw\" (UID: \"0bef4caa-4178-40f3-8486-a824302db6ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.808915 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlgk2\" (UniqueName: \"kubernetes.io/projected/8518e88a-aacc-484f-b82d-d55106c5bdcf-kube-api-access-hlgk2\") pod \"apiserver-76f77b778f-9njzd\" (UID: \"8518e88a-aacc-484f-b82d-d55106c5bdcf\") " pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.825906 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.846320 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.866020 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.900876 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg99k\" (UniqueName: \"kubernetes.io/projected/c1683ba1-6f04-40b1-b605-1ca997a00d59-kube-api-access-tg99k\") pod \"machine-api-operator-5694c8668f-wx229\" (UID: \"c1683ba1-6f04-40b1-b605-1ca997a00d59\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.920141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx9f9\" (UniqueName: 
\"kubernetes.io/projected/4576dffd-6571-46e9-bb64-3add543049a2-kube-api-access-sx9f9\") pod \"openshift-config-operator-7777fb866f-5k48v\" (UID: \"4576dffd-6571-46e9-bb64-3add543049a2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.925646 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.939942 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hldln\" (UniqueName: \"kubernetes.io/projected/f1daf75f-9e24-416c-b435-8c949cf6db5e-kube-api-access-hldln\") pod \"etcd-operator-b45778765-c694x\" (UID: \"f1daf75f-9e24-416c-b435-8c949cf6db5e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.967696 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngg9x\" (UniqueName: \"kubernetes.io/projected/84b93e6e-f3a8-4b32-beae-85a29e271c68-kube-api-access-ngg9x\") pod \"route-controller-manager-6576b87f9c-5m8cj\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:06 crc kubenswrapper[4869]: I0314 09:01:06.983584 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjckf\" (UniqueName: \"kubernetes.io/projected/8b763477-ddeb-476e-9734-58edd336b9e2-kube-api-access-bjckf\") pod \"openshift-apiserver-operator-796bbdcf4f-wpddw\" (UID: \"8b763477-ddeb-476e-9734-58edd336b9e2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.000606 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.004124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-858j7\" (UniqueName: \"kubernetes.io/projected/77aea188-0ec6-41c9-9d17-26a579ed431c-kube-api-access-858j7\") pod \"openshift-controller-manager-operator-756b6f6bc6-jwrdf\" (UID: \"77aea188-0ec6-41c9-9d17-26a579ed431c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.015786 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.023388 4869 request.go:700] Waited for 1.991531546s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.031745 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84wp6\" (UniqueName: \"kubernetes.io/projected/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-kube-api-access-84wp6\") pod \"controller-manager-879f6c89f-729jx\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.033434 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.043708 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.069788 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.070029 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2c5\" (UniqueName: \"kubernetes.io/projected/e01443e3-18ec-4ad3-821a-14332c44fe30-kube-api-access-mh2c5\") pod \"machine-approver-56656f9798-mfx9n\" (UID: \"e01443e3-18ec-4ad3-821a-14332c44fe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.074587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49x5v\" (UniqueName: \"kubernetes.io/projected/08f143d2-49b2-4bba-a5fa-a53015a6fa57-kube-api-access-49x5v\") pod \"dns-operator-744455d44c-qhd6d\" (UID: \"08f143d2-49b2-4bba-a5fa-a53015a6fa57\") " pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.086479 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.102729 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfqnz\" (UniqueName: \"kubernetes.io/projected/4df77986-9162-4886-944e-fb40e804f2db-kube-api-access-tfqnz\") pod \"authentication-operator-69f744f599-rn9g2\" (UID: \"4df77986-9162-4886-944e-fb40e804f2db\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.106785 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.109284 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.125655 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.128430 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.165343 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b622w\" (UniqueName: \"kubernetes.io/projected/14eab3cd-227a-4e8a-8bf1-f78ee852637c-kube-api-access-b622w\") pod \"console-f9d7485db-plgzk\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.190124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crblk\" (UniqueName: \"kubernetes.io/projected/33333edb-d3b9-49eb-acc4-bc014c8da396-kube-api-access-crblk\") pod \"downloads-7954f5f757-zgn62\" (UID: \"33333edb-d3b9-49eb-acc4-bc014c8da396\") " pod="openshift-console/downloads-7954f5f757-zgn62" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.218309 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.218988 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnh5b\" (UniqueName: \"kubernetes.io/projected/6d3f7d57-086d-45b5-8b44-c749f1a13821-kube-api-access-qnh5b\") pod \"oauth-openshift-558db77b4-c25vk\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.231165 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wx229"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.243138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q274w\" (UniqueName: \"kubernetes.io/projected/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-kube-api-access-q274w\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.250858 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.257540 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhzgn\" (UniqueName: \"kubernetes.io/projected/7422b29f-2afe-4539-9c59-320e01b530b2-kube-api-access-vhzgn\") pod \"cluster-samples-operator-665b6dd947-2t9jx\" (UID: \"7422b29f-2afe-4539-9c59-320e01b530b2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.263473 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4l4g\" (UniqueName: 
\"kubernetes.io/projected/d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd-kube-api-access-n4l4g\") pod \"console-operator-58897d9998-dzfrm\" (UID: \"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd\") " pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.268308 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.280800 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.281832 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23a9551f-4760-4b3d-a00e-b4c2f623c0c8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fgfmm\" (UID: \"23a9551f-4760-4b3d-a00e-b4c2f623c0c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.303145 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqkj5\" (UniqueName: \"kubernetes.io/projected/bfc1e74f-3dd9-4140-855b-e73396e54883-kube-api-access-xqkj5\") pod \"apiserver-7bbb656c7d-p6mcj\" (UID: \"bfc1e74f-3dd9-4140-855b-e73396e54883\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.342186 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clzbr\" (UniqueName: \"kubernetes.io/projected/e3685946-eb77-4e98-bba5-d642c8697037-kube-api-access-clzbr\") pod \"ingress-canary-csls9\" (UID: \"e3685946-eb77-4e98-bba5-d642c8697037\") " pod="openshift-ingress-canary/ingress-canary-csls9" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.359608 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ks2f\" (UniqueName: \"kubernetes.io/projected/0bef4caa-4178-40f3-8486-a824302db6ca-kube-api-access-4ks2f\") pod \"multus-admission-controller-857f4d67dd-kv4dw\" (UID: \"0bef4caa-4178-40f3-8486-a824302db6ca\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.378635 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.402455 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2hwq\" (UniqueName: \"kubernetes.io/projected/4446de34-e54f-4549-babc-9615eecc511a-kube-api-access-p2hwq\") pod \"service-ca-9c57cc56f-5kmqk\" (UID: \"4446de34-e54f-4549-babc-9615eecc511a\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.414235 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.428394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp5fg\" (UniqueName: \"kubernetes.io/projected/51d0aa6a-7ea1-42c6-b81c-7cedeb75514c-kube-api-access-dp5fg\") pod \"packageserver-d55dfcdfc-nbs46\" (UID: \"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.431861 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sx6t\" (UniqueName: \"kubernetes.io/projected/7d051a1c-0150-43fd-b2dd-45ba5f654021-kube-api-access-7sx6t\") pod \"router-default-5444994796-2pnmj\" (UID: \"7d051a1c-0150-43fd-b2dd-45ba5f654021\") " pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.453893 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdbx\" (UniqueName: \"kubernetes.io/projected/7e245ff0-c737-4c36-aaad-f79c24030113-kube-api-access-vsdbx\") pod \"collect-profiles-29557980-h585m\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.470167 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvrzr\" (UniqueName: \"kubernetes.io/projected/2cd7688a-9024-48c6-9094-3df0aaa49aa7-kube-api-access-bvrzr\") pod \"csi-hostpathplugin-xwb49\" (UID: \"2cd7688a-9024-48c6-9094-3df0aaa49aa7\") " pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.475327 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.482930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdgsw\" (UniqueName: \"kubernetes.io/projected/710f7c79-5b5b-496d-bd68-0b2c6ceebddf-kube-api-access-cdgsw\") pod \"olm-operator-6b444d44fb-7slw5\" (UID: \"710f7c79-5b5b-496d-bd68-0b2c6ceebddf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.483417 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zgn62" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.483690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.484174 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-csls9" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.490479 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.496759 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.506600 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.508329 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4q7z\" (UniqueName: \"kubernetes.io/projected/b7fe5a99-824d-49bc-aed1-c14fef7eddc8-kube-api-access-j4q7z\") pod \"machine-config-controller-84d6567774-pp9bj\" (UID: \"b7fe5a99-824d-49bc-aed1-c14fef7eddc8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.514784 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.525499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69ff3e78-eb90-4e0f-a99b-f80cc1c52de9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n4xl6\" (UID: \"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.535947 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.539338 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.567147 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9njzd"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.567704 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdflb\" (UniqueName: \"kubernetes.io/projected/d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09-kube-api-access-qdflb\") pod \"control-plane-machine-set-operator-78cbb6b69f-kz767\" (UID: \"d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.572693 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5k48v"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.583569 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c694x"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.586210 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.593029 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvxjk\" (UniqueName: \"kubernetes.io/projected/db3ce98b-d0f8-4fda-84cb-390a11eb508e-kube-api-access-xvxjk\") pod \"auto-csr-approver-29557980-9t5kk\" (UID: \"db3ce98b-d0f8-4fda-84cb-390a11eb508e\") " pod="openshift-infra/auto-csr-approver-29557980-9t5kk" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.601694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5kn9\" (UniqueName: \"kubernetes.io/projected/5ad6520f-5f43-465e-877b-94854b4ba96a-kube-api-access-s5kn9\") pod \"migrator-59844c95c7-g2q87\" (UID: \"5ad6520f-5f43-465e-877b-94854b4ba96a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.603965 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qhd6d"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.605126 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.624626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prntp\" (UniqueName: \"kubernetes.io/projected/09d377fd-9022-4280-b48a-10a75f18cb67-kube-api-access-prntp\") pod \"package-server-manager-789f6589d5-ldfhw\" (UID: \"09d377fd-9022-4280-b48a-10a75f18cb67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.625183 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.631488 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fgj6\" (UniqueName: \"kubernetes.io/projected/f844658f-e0d6-4d40-b67a-29c94cf226b0-kube-api-access-6fgj6\") pod \"service-ca-operator-777779d784-bgzvz\" (UID: \"f844658f-e0d6-4d40-b67a-29c94cf226b0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.640882 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.647571 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-729jx"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.648910 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.651816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqxs\" (UniqueName: \"kubernetes.io/projected/306303aa-346b-43a9-9797-f83308ea2b31-kube-api-access-6jqxs\") pod \"catalog-operator-68c6474976-wd9hv\" (UID: \"306303aa-346b-43a9-9797-f83308ea2b31\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.655253 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rn9g2"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.665983 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" event={"ID":"c1683ba1-6f04-40b1-b605-1ca997a00d59","Type":"ContainerStarted","Data":"4b6f43e4d633bab8d4b75eaa9278bfbe4a4ee2f0a22624401e74b1e53434f9bd"} Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.666134 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhs9v\" (UniqueName: \"kubernetes.io/projected/7d0b3ce9-3a56-4562-9534-dc512f82474d-kube-api-access-nhs9v\") pod \"marketplace-operator-79b997595-fjgpv\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") " pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.675339 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" event={"ID":"8518e88a-aacc-484f-b82d-d55106c5bdcf","Type":"ContainerStarted","Data":"d2a100d519830bf52bf01a5f0d37d1936423c7c00f7a51bf54a44ca5de49d274"} Mar 14 09:01:07 crc kubenswrapper[4869]: W0314 09:01:07.677006 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4576dffd_6571_46e9_bb64_3add543049a2.slice/crio-e2f0c177f8096b04ba6e015183f4d0f592b19fab426562691280add52c3ec08a WatchSource:0}: Error finding container e2f0c177f8096b04ba6e015183f4d0f592b19fab426562691280add52c3ec08a: Status 404 returned error can't find the container with id e2f0c177f8096b04ba6e015183f4d0f592b19fab426562691280add52c3ec08a Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.682724 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" event={"ID":"84b93e6e-f3a8-4b32-beae-85a29e271c68","Type":"ContainerStarted","Data":"83d4b1b2a59c8771ca4d8d78ea706b9ef625a99b67fe31a10429d930f981fbc5"} Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.683226 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.684737 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cfa3df2-d8c2-4ce8-88ef-31963b5e027f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7q5vd\" (UID: \"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.692086 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" event={"ID":"e01443e3-18ec-4ad3-821a-14332c44fe30","Type":"ContainerStarted","Data":"c925ec76108740c8ab066a9fe189bd6b0b7723d358db437ffae47202faff327d"} Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.692482 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.700930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" event={"ID":"8b763477-ddeb-476e-9734-58edd336b9e2","Type":"ContainerStarted","Data":"ff8123c5b1a2dda4a9392b82821c6a74306e5e2774b90075f52d60a438e81c5e"} Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.714740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpswb\" (UniqueName: \"kubernetes.io/projected/b6e212ea-bda4-4257-b21c-6eadd30f6732-kube-api-access-qpswb\") pod \"machine-config-operator-74547568cd-g5l86\" (UID: \"b6e212ea-bda4-4257-b21c-6eadd30f6732\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.717603 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.727460 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.729062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c1cd50-7623-4fd4-aea2-012d1ff4a3a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-brsgj\" (UID: \"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.738090 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.743445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" event={"ID":"f1daf75f-9e24-416c-b435-8c949cf6db5e","Type":"ContainerStarted","Data":"57d9c49c528f621291a8d6300ecdcbd7aeb795f1592c31cdf47e33fececc3491"} Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.748972 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xwb49" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.751446 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kv4dw"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.758821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnbwq\" (UniqueName: \"kubernetes.io/projected/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-kube-api-access-xnbwq\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.766199 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4aa47dbd-0cbc-4009-9e42-22f4e4eb7828-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wknzm\" (UID: \"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.769851 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.789116 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkbks\" (UniqueName: \"kubernetes.io/projected/bf0df065-c182-44e0-84d1-f0e491baf3f5-kube-api-access-gkbks\") pod \"kube-storage-version-migrator-operator-b67b599dd-bzlc7\" (UID: \"bf0df065-c182-44e0-84d1-f0e491baf3f5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.809102 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-trusted-ca\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.809184 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91339654-6d93-49bd-b48a-d2cf1dde09aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.809244 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-tls\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.809275 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91339654-6d93-49bd-b48a-d2cf1dde09aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.809295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvskr\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-kube-api-access-wvskr\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.809317 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-bound-sa-token\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.809440 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.809474 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-certificates\") pod 
\"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: E0314 09:01:07.810027 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.310005341 +0000 UTC m=+221.282287394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.824735 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.832220 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.879028 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.900842 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.910650 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.911137 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec0ff212-a526-4ee2-8310-83def5210470-config-volume\") pod \"dns-default-jwbdc\" (UID: \"ec0ff212-a526-4ee2-8310-83def5210470\") " pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.911372 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-certificates\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.911569 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec0ff212-a526-4ee2-8310-83def5210470-metrics-tls\") pod \"dns-default-jwbdc\" (UID: \"ec0ff212-a526-4ee2-8310-83def5210470\") " pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.911608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-trusted-ca\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: 
\"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.911823 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91339654-6d93-49bd-b48a-d2cf1dde09aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.912100 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxmpl\" (UniqueName: \"kubernetes.io/projected/ec0ff212-a526-4ee2-8310-83def5210470-kube-api-access-cxmpl\") pod \"dns-default-jwbdc\" (UID: \"ec0ff212-a526-4ee2-8310-83def5210470\") " pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.912236 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c505ad08-8705-4153-b9a2-891de6addf95-certs\") pod \"machine-config-server-kvsmq\" (UID: \"c505ad08-8705-4153-b9a2-891de6addf95\") " pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.926972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-tls\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.927051 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.927108 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k86lz\" (UniqueName: \"kubernetes.io/projected/c505ad08-8705-4153-b9a2-891de6addf95-kube-api-access-k86lz\") pod \"machine-config-server-kvsmq\" (UID: \"c505ad08-8705-4153-b9a2-891de6addf95\") " pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.927201 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvskr\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-kube-api-access-wvskr\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.927221 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c505ad08-8705-4153-b9a2-891de6addf95-node-bootstrap-token\") pod \"machine-config-server-kvsmq\" (UID: \"c505ad08-8705-4153-b9a2-891de6addf95\") " pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.927249 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91339654-6d93-49bd-b48a-d2cf1dde09aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.929545 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-trusted-ca\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.930574 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91339654-6d93-49bd-b48a-d2cf1dde09aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.930608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-bound-sa-token\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.932655 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-certificates\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.934391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-tls\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: E0314 09:01:07.956321 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.456269629 +0000 UTC m=+221.428551682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.956864 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.962439 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-plgzk"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.963895 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.970772 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91339654-6d93-49bd-b48a-d2cf1dde09aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.973968 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm"] Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.977944 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvskr\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-kube-api-access-wvskr\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.987060 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:07 crc kubenswrapper[4869]: I0314 09:01:07.987291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-bound-sa-token\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.030085 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-csls9"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.046589 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dzfrm"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.047757 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxmpl\" (UniqueName: \"kubernetes.io/projected/ec0ff212-a526-4ee2-8310-83def5210470-kube-api-access-cxmpl\") pod \"dns-default-jwbdc\" (UID: \"ec0ff212-a526-4ee2-8310-83def5210470\") " pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.047865 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c505ad08-8705-4153-b9a2-891de6addf95-certs\") pod \"machine-config-server-kvsmq\" (UID: \"c505ad08-8705-4153-b9a2-891de6addf95\") " pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.047916 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k86lz\" (UniqueName: \"kubernetes.io/projected/c505ad08-8705-4153-b9a2-891de6addf95-kube-api-access-k86lz\") pod \"machine-config-server-kvsmq\" (UID: \"c505ad08-8705-4153-b9a2-891de6addf95\") " 
pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.047941 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c505ad08-8705-4153-b9a2-891de6addf95-node-bootstrap-token\") pod \"machine-config-server-kvsmq\" (UID: \"c505ad08-8705-4153-b9a2-891de6addf95\") " pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.048435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec0ff212-a526-4ee2-8310-83def5210470-config-volume\") pod \"dns-default-jwbdc\" (UID: \"ec0ff212-a526-4ee2-8310-83def5210470\") " pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.048468 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.048529 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec0ff212-a526-4ee2-8310-83def5210470-metrics-tls\") pod \"dns-default-jwbdc\" (UID: \"ec0ff212-a526-4ee2-8310-83def5210470\") " pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.052607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec0ff212-a526-4ee2-8310-83def5210470-config-volume\") pod \"dns-default-jwbdc\" (UID: \"ec0ff212-a526-4ee2-8310-83def5210470\") " 
pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.053487 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec0ff212-a526-4ee2-8310-83def5210470-metrics-tls\") pod \"dns-default-jwbdc\" (UID: \"ec0ff212-a526-4ee2-8310-83def5210470\") " pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.053830 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.55381606 +0000 UTC m=+221.526098103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.058220 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c505ad08-8705-4153-b9a2-891de6addf95-node-bootstrap-token\") pod \"machine-config-server-kvsmq\" (UID: \"c505ad08-8705-4153-b9a2-891de6addf95\") " pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.066065 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c505ad08-8705-4153-b9a2-891de6addf95-certs\") pod \"machine-config-server-kvsmq\" (UID: \"c505ad08-8705-4153-b9a2-891de6addf95\") " 
pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.105078 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zgn62"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.112764 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k86lz\" (UniqueName: \"kubernetes.io/projected/c505ad08-8705-4153-b9a2-891de6addf95-kube-api-access-k86lz\") pod \"machine-config-server-kvsmq\" (UID: \"c505ad08-8705-4153-b9a2-891de6addf95\") " pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.118782 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxmpl\" (UniqueName: \"kubernetes.io/projected/ec0ff212-a526-4ee2-8310-83def5210470-kube-api-access-cxmpl\") pod \"dns-default-jwbdc\" (UID: \"ec0ff212-a526-4ee2-8310-83def5210470\") " pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:08 crc kubenswrapper[4869]: W0314 09:01:08.142842 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3685946_eb77_4e98_bba5_d642c8697037.slice/crio-acd032f58a6f4295dcc5c97e856644d18ab6948d7efc652940b7cb820e4b380a WatchSource:0}: Error finding container acd032f58a6f4295dcc5c97e856644d18ab6948d7efc652940b7cb820e4b380a: Status 404 returned error can't find the container with id acd032f58a6f4295dcc5c97e856644d18ab6948d7efc652940b7cb820e4b380a Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.143606 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kvsmq" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.150591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.151238 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.651212337 +0000 UTC m=+221.623494390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.175168 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c25vk"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.231196 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5kmqk"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.255905 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.256237 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.756223895 +0000 UTC m=+221.728505948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.356867 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.357069 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.857027857 +0000 UTC m=+221.829309910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.357548 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.358002 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.857988421 +0000 UTC m=+221.830270484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.405791 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.458672 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.458996 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:08.958944237 +0000 UTC m=+221.931226290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.459204 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.459740 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-03-14 09:01:08.959717857 +0000 UTC m=+221.932000100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.567213 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.568167 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.06813835 +0000 UTC m=+222.040420413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.568274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.568849 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.068839209 +0000 UTC m=+222.041121262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.588805 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.669590 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.669964 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.670001 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.670109 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.670154 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.673727 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.173495498 +0000 UTC m=+222.145777541 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.674844 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.683502 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.683616 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.685870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.771163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.772244 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.272208807 +0000 UTC m=+222.244490870 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.789014 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.799254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" event={"ID":"e01443e3-18ec-4ad3-821a-14332c44fe30","Type":"ContainerStarted","Data":"14d748761f89b83919736fe644fd86ba4439e7751a6b3c9964a3da12fb0b892c"} Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.830244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" event={"ID":"23a9551f-4760-4b3d-a00e-b4c2f623c0c8","Type":"ContainerStarted","Data":"21aaf9c4d31a461f249b52d871076153f6768b9b2c33c8e0715ac682850f222e"} Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.830848 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.835175 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.838358 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwb49"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.843923 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" event={"ID":"ccacdee4-4ffc-4ddd-9a09-d80436e38e64","Type":"ContainerStarted","Data":"578740a07eb12321cbd001114c52ce7737433581694e5f0aee68e2f356e6b6a2"} Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.849067 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kvsmq" event={"ID":"c505ad08-8705-4153-b9a2-891de6addf95","Type":"ContainerStarted","Data":"314381d546fa90cc59a8f2378b6093d5629c676b4dc653c52f55c7925fe1b0a7"} Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.870001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" event={"ID":"4576dffd-6571-46e9-bb64-3add543049a2","Type":"ContainerStarted","Data":"e2f0c177f8096b04ba6e015183f4d0f592b19fab426562691280add52c3ec08a"} Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.878783 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.879601 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.379575875 +0000 UTC m=+222.351857928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.916815 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557980-9t5kk"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.927656 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.936061 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.946096 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.954282 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.966475 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" event={"ID":"6d3f7d57-086d-45b5-8b44-c749f1a13821","Type":"ContainerStarted","Data":"6b6eca8bde35ce621bc2f320fe68255d1057c2dad0abc096041e2b91b9f88a50"} Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.975309 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5"] Mar 14 09:01:08 crc kubenswrapper[4869]: I0314 09:01:08.980392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:08 crc kubenswrapper[4869]: E0314 09:01:08.981535 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.481498856 +0000 UTC m=+222.453780909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.003475 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" event={"ID":"f1daf75f-9e24-416c-b435-8c949cf6db5e","Type":"ContainerStarted","Data":"b610131f6080fa395765e96330d789f2a839b28bf84e9eac6f3b3e633f95d441"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.010886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2pnmj" event={"ID":"7d051a1c-0150-43fd-b2dd-45ba5f654021","Type":"ContainerStarted","Data":"3a5fcec7804fcbbca5db035f7e4f90407583e09adc76bfbeb45acdd87d51e7fa"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.022271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" event={"ID":"77aea188-0ec6-41c9-9d17-26a579ed431c","Type":"ContainerStarted","Data":"5277dc3787663339c83eb10153a304e2b0a5cd0c2fbf15e6332d567ee04203e5"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.022338 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" event={"ID":"77aea188-0ec6-41c9-9d17-26a579ed431c","Type":"ContainerStarted","Data":"250754eb8fa7c348f46940999926e0a641f115ee8cd94592becb72ddc613c4ac"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.041471 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" event={"ID":"08f143d2-49b2-4bba-a5fa-a53015a6fa57","Type":"ContainerStarted","Data":"b1a45cb6902211412b2c9b8eb91982546266c862b66485a9f92e155027afdd66"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.059648 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-plgzk" event={"ID":"14eab3cd-227a-4e8a-8bf1-f78ee852637c","Type":"ContainerStarted","Data":"f94c107e3a50e49507e4df30cc2c547dd004003dd07aad6152357cb45f33f281"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.062125 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" event={"ID":"4df77986-9162-4886-944e-fb40e804f2db","Type":"ContainerStarted","Data":"e3bef63b045a063b96108e22639e283f0739def5669a5c10b5c39a50e369d1bd"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.070050 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" event={"ID":"0bef4caa-4178-40f3-8486-a824302db6ca","Type":"ContainerStarted","Data":"b2bbc936e0c484c85a07a5773b7ab5e3db0a2cf4a7e9d802c523dca23837f403"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.084750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.090444 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 09:01:09.590422652 +0000 UTC m=+222.562704705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.110256 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.114235 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" event={"ID":"c1683ba1-6f04-40b1-b605-1ca997a00d59","Type":"ContainerStarted","Data":"35cb2d1591101d381021bbc576e268760470463c77fa925e07b296503727fc44"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.133278 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" event={"ID":"8b763477-ddeb-476e-9734-58edd336b9e2","Type":"ContainerStarted","Data":"3098220dad7b8071658c6d7f9e0fc1684201469f0f56fa7d487bcc0ae7638790"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.144563 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" event={"ID":"4446de34-e54f-4549-babc-9615eecc511a","Type":"ContainerStarted","Data":"8ac3a324a3c99d09b29aa3f67191476877903ddf099d293814d8a6fb16b904c2"} Mar 14 09:01:09 crc kubenswrapper[4869]: W0314 09:01:09.145657 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod710f7c79_5b5b_496d_bd68_0b2c6ceebddf.slice/crio-5b9f1d4bfe61a8395194ba785e06a2385ec8c7a3b7d473b8b8d144f56cf1d845 WatchSource:0}: Error finding container 5b9f1d4bfe61a8395194ba785e06a2385ec8c7a3b7d473b8b8d144f56cf1d845: Status 404 returned error can't find the container with id 5b9f1d4bfe61a8395194ba785e06a2385ec8c7a3b7d473b8b8d144f56cf1d845 Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.150135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dzfrm" event={"ID":"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd","Type":"ContainerStarted","Data":"a67ee223dcd59870e2af15179532f65d2775b2ddbe78f2e4af53f8f7d9ee63db"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.168649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" event={"ID":"84b93e6e-f3a8-4b32-beae-85a29e271c68","Type":"ContainerStarted","Data":"dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.168847 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.175504 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5m8cj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.175675 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" podUID="84b93e6e-f3a8-4b32-beae-85a29e271c68" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.186333 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-csls9" event={"ID":"e3685946-eb77-4e98-bba5-d642c8697037","Type":"ContainerStarted","Data":"acd032f58a6f4295dcc5c97e856644d18ab6948d7efc652940b7cb820e4b380a"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.191552 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.192960 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.692941409 +0000 UTC m=+222.665223462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.205444 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" event={"ID":"7422b29f-2afe-4539-9c59-320e01b530b2","Type":"ContainerStarted","Data":"d618ef0d5531d49912f44d763d44677259202e77c0874172c449431acefce9f1"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.209527 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.211130 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.213612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zgn62" event={"ID":"33333edb-d3b9-49eb-acc4-bc014c8da396","Type":"ContainerStarted","Data":"871039aa22974c416093ff78dcfa1e1b486eb65b97a2d8c9978ad4d9976caab0"} Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.274988 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.275633 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.285192 4869 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.293378 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.293628 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.793575887 +0000 UTC m=+222.765857940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.293938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.297071 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.797058574 +0000 UTC m=+222.769340627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: W0314 09:01:09.302696 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ad6520f_5f43_465e_877b_94854b4ba96a.slice/crio-c6b9953714632915947ee57d3a53bce3dcddb8e2affd207fe7a70d8bbc394ad4 WatchSource:0}: Error finding container c6b9953714632915947ee57d3a53bce3dcddb8e2affd207fe7a70d8bbc394ad4: Status 404 returned error can't find the container with id c6b9953714632915947ee57d3a53bce3dcddb8e2affd207fe7a70d8bbc394ad4 Mar 14 09:01:09 crc kubenswrapper[4869]: W0314 09:01:09.337673 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6e212ea_bda4_4257_b21c_6eadd30f6732.slice/crio-33939afbbe30876f93937fac173cad0c0b37c1c4d5f27d93b140d368a110b74f WatchSource:0}: Error finding container 33939afbbe30876f93937fac173cad0c0b37c1c4d5f27d93b140d368a110b74f: Status 404 returned error can't find the container with id 33939afbbe30876f93937fac173cad0c0b37c1c4d5f27d93b140d368a110b74f Mar 14 09:01:09 crc kubenswrapper[4869]: W0314 09:01:09.357079 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e245ff0_c737_4c36_aaad_f79c24030113.slice/crio-e927466848bc479c2874547580d26e764deac10fdf3f087ea3d62107502a8163 WatchSource:0}: Error finding container e927466848bc479c2874547580d26e764deac10fdf3f087ea3d62107502a8163: Status 404 returned error can't find the container with id e927466848bc479c2874547580d26e764deac10fdf3f087ea3d62107502a8163 Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.366533 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jwbdc"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.395924 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.396265 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:09.896243495 +0000 UTC m=+222.868525548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.426950 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.508737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.509113 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.009096471 +0000 UTC m=+222.981378524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.534768 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.552011 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.561254 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.576665 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjgpv"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.585623 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd"] Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.616106 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.616486 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.116472499 +0000 UTC m=+223.088754552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.616911 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.616934 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.683208 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jwrdf" podStartSLOduration=158.683182942 podStartE2EDuration="2m38.683182942s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:09.668146493 +0000 
UTC m=+222.640428566" watchObservedRunningTime="2026-03-14 09:01:09.683182942 +0000 UTC m=+222.655464985" Mar 14 09:01:09 crc kubenswrapper[4869]: W0314 09:01:09.708986 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec0ff212_a526_4ee2_8310_83def5210470.slice/crio-63bda0da21592c8f6fd988d405483ff81110c1a3d1d106180f60272b463fef42 WatchSource:0}: Error finding container 63bda0da21592c8f6fd988d405483ff81110c1a3d1d106180f60272b463fef42: Status 404 returned error can't find the container with id 63bda0da21592c8f6fd988d405483ff81110c1a3d1d106180f60272b463fef42 Mar 14 09:01:09 crc kubenswrapper[4869]: W0314 09:01:09.716296 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod306303aa_346b_43a9_9797_f83308ea2b31.slice/crio-36dda691f4a5979487ef3319a30735a65b72d50604e52cf3cfab10d8e84ef33e WatchSource:0}: Error finding container 36dda691f4a5979487ef3319a30735a65b72d50604e52cf3cfab10d8e84ef33e: Status 404 returned error can't find the container with id 36dda691f4a5979487ef3319a30735a65b72d50604e52cf3cfab10d8e84ef33e Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.719749 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.720276 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.220255057 +0000 UTC m=+223.192537110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.806651 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" podStartSLOduration=157.806621965 podStartE2EDuration="2m37.806621965s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:09.7200031 +0000 UTC m=+222.692285153" watchObservedRunningTime="2026-03-14 09:01:09.806621965 +0000 UTC m=+222.778904028" Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.808124 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-c694x" podStartSLOduration=158.808116493 podStartE2EDuration="2m38.808116493s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:09.803359993 +0000 UTC m=+222.775642066" watchObservedRunningTime="2026-03-14 09:01:09.808116493 +0000 UTC m=+222.780398546" Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.825664 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wpddw" podStartSLOduration=158.825622265 podStartE2EDuration="2m38.825622265s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:09.821541452 +0000 UTC m=+222.793823505" watchObservedRunningTime="2026-03-14 09:01:09.825622265 +0000 UTC m=+222.797904328" Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.826648 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.826949 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.326919276 +0000 UTC m=+223.299201329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.827142 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.828953 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.328906637 +0000 UTC m=+223.301188700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: I0314 09:01:09.929217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:09 crc kubenswrapper[4869]: E0314 09:01:09.929884 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.429856123 +0000 UTC m=+223.402138176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:09 crc kubenswrapper[4869]: W0314 09:01:09.951155 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-009610ef89e8488de1fafd50eea13bccd82a48e3760271eb19cb7a9d3e7b5cdf WatchSource:0}: Error finding container 009610ef89e8488de1fafd50eea13bccd82a48e3760271eb19cb7a9d3e7b5cdf: Status 404 returned error can't find the container with id 009610ef89e8488de1fafd50eea13bccd82a48e3760271eb19cb7a9d3e7b5cdf Mar 14 09:01:09 crc kubenswrapper[4869]: W0314 09:01:09.958539 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-4b4cc624c93efd0454af55d4412b7e8bb37a8e5b9cb41226d40a06a2bc559b8e WatchSource:0}: Error finding container 4b4cc624c93efd0454af55d4412b7e8bb37a8e5b9cb41226d40a06a2bc559b8e: Status 404 returned error can't find the container with id 4b4cc624c93efd0454af55d4412b7e8bb37a8e5b9cb41226d40a06a2bc559b8e Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.031660 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:10 crc 
kubenswrapper[4869]: E0314 09:01:10.032230 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.532211124 +0000 UTC m=+223.504493167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.132758 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.132977 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.632944485 +0000 UTC m=+223.605226538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.219308 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" event={"ID":"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f","Type":"ContainerStarted","Data":"74f665577d03745e01d0abc88d1306563fd12d06bd3aaaa89e09ce51025d05a2"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.220291 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" event={"ID":"f844658f-e0d6-4d40-b67a-29c94cf226b0","Type":"ContainerStarted","Data":"53d271a205e90510a85adc88dc99791035d31d456fb43768e4fed96151984309"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.221103 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"516ee5786dda98e19fee7526516d2355d43a6f7075fb8aeeb5c0230fea7712d8"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.222250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4b4cc624c93efd0454af55d4412b7e8bb37a8e5b9cb41226d40a06a2bc559b8e"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.223415 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" event={"ID":"ccacdee4-4ffc-4ddd-9a09-d80436e38e64","Type":"ContainerStarted","Data":"afbd9fb2e4d0ed11c8114a21bad751d2aed10068efb864b501af6b1e05a9cfd2"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.223856 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.225159 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kvsmq" event={"ID":"c505ad08-8705-4153-b9a2-891de6addf95","Type":"ContainerStarted","Data":"d8b84b82cbe4d4525a074d2a0d74fd551aa487e4e6e2ff28320dcc7927b537f3"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.227042 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-729jx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.227143 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" podUID="ccacdee4-4ffc-4ddd-9a09-d80436e38e64" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.229162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"009610ef89e8488de1fafd50eea13bccd82a48e3760271eb19cb7a9d3e7b5cdf"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.230916 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="8518e88a-aacc-484f-b82d-d55106c5bdcf" containerID="83802b70bd4808d294da1348aef9e90bf362619b71583375cfa7a66936da9a8f" exitCode=0 Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.231332 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" event={"ID":"8518e88a-aacc-484f-b82d-d55106c5bdcf","Type":"ContainerDied","Data":"83802b70bd4808d294da1348aef9e90bf362619b71583375cfa7a66936da9a8f"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.232395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" event={"ID":"09d377fd-9022-4280-b48a-10a75f18cb67","Type":"ContainerStarted","Data":"6762c2be5be93f7ba02ca9d380a4257795c65c46b88064a4751038e327071341"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.233662 4869 generic.go:334] "Generic (PLEG): container finished" podID="4576dffd-6571-46e9-bb64-3add543049a2" containerID="e9e4ef15759267cd7df96a73bbaa6d28749d4e5057459bb63074c9ef6ba8959d" exitCode=0 Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.233728 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" event={"ID":"4576dffd-6571-46e9-bb64-3add543049a2","Type":"ContainerDied","Data":"e9e4ef15759267cd7df96a73bbaa6d28749d4e5057459bb63074c9ef6ba8959d"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.233944 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.235456 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.735441989 +0000 UTC m=+223.707724112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.236363 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" event={"ID":"b6e212ea-bda4-4257-b21c-6eadd30f6732","Type":"ContainerStarted","Data":"33939afbbe30876f93937fac173cad0c0b37c1c4d5f27d93b140d368a110b74f"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.237342 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" event={"ID":"710f7c79-5b5b-496d-bd68-0b2c6ceebddf","Type":"ContainerStarted","Data":"5b9f1d4bfe61a8395194ba785e06a2385ec8c7a3b7d473b8b8d144f56cf1d845"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.240323 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" event={"ID":"c1683ba1-6f04-40b1-b605-1ca997a00d59","Type":"ContainerStarted","Data":"6cf76055747fa9fd271c5b8c9dad2cce63ca2b5be978f7b013d1b5a1c8dcb0f1"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.240766 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" podStartSLOduration=159.240745853 
podStartE2EDuration="2m39.240745853s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:10.240314502 +0000 UTC m=+223.212596575" watchObservedRunningTime="2026-03-14 09:01:10.240745853 +0000 UTC m=+223.213027906" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.241703 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" event={"ID":"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828","Type":"ContainerStarted","Data":"d766cd65cc4bfaa485fd256a513938dd93d35b9097e988ccb5fd0d0c38f740c1"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.244789 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" event={"ID":"bfc1e74f-3dd9-4140-855b-e73396e54883","Type":"ContainerStarted","Data":"38689e6796e3ef165201581ec5dbee7963ce1435a103c1a9ed31ace86695f710"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.248388 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" event={"ID":"db3ce98b-d0f8-4fda-84cb-390a11eb508e","Type":"ContainerStarted","Data":"9400c16561ce2d610a5c770a3716236743a27b4b9214af7f317a5465ff337903"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.249114 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" event={"ID":"bf0df065-c182-44e0-84d1-f0e491baf3f5","Type":"ContainerStarted","Data":"efc79ca43b043874406742efa79fa86af8c9a0f37536cc41058387f5985b806e"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.249694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jwbdc" 
event={"ID":"ec0ff212-a526-4ee2-8310-83def5210470","Type":"ContainerStarted","Data":"63bda0da21592c8f6fd988d405483ff81110c1a3d1d106180f60272b463fef42"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.250654 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" event={"ID":"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4","Type":"ContainerStarted","Data":"abae354e8a1fe3d315a73edb7970159a2059fa9807989d21b6f0f621c1f5b290"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.252312 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" event={"ID":"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9","Type":"ContainerStarted","Data":"2a290e8a36ab8d9d3ba738d276d9de50219dd064d10c855890055ed1504398b9"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.255392 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" event={"ID":"7e245ff0-c737-4c36-aaad-f79c24030113","Type":"ContainerStarted","Data":"e927466848bc479c2874547580d26e764deac10fdf3f087ea3d62107502a8163"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.256842 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" event={"ID":"306303aa-346b-43a9-9797-f83308ea2b31","Type":"ContainerStarted","Data":"36dda691f4a5979487ef3319a30735a65b72d50604e52cf3cfab10d8e84ef33e"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.259730 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dzfrm" event={"ID":"d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd","Type":"ContainerStarted","Data":"409e62ad35dc90c1b8b6f5012d5eb42fe99a68e772fd9a146e7dd2a55388cb29"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.260932 4869 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.262102 4869 patch_prober.go:28] interesting pod/console-operator-58897d9998-dzfrm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.262151 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dzfrm" podUID="d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.265479 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwb49" event={"ID":"2cd7688a-9024-48c6-9094-3df0aaa49aa7","Type":"ContainerStarted","Data":"8c21a3964776145eabae6a40f3f7c19b91e898dc452511a8ae0608960bf0f8d4"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.268337 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" event={"ID":"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c","Type":"ContainerStarted","Data":"b4decaf4a8eda1667026428ec972b9e83565e227f6a1333a3d53d38243fafbce"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.270963 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87" event={"ID":"5ad6520f-5f43-465e-877b-94854b4ba96a","Type":"ContainerStarted","Data":"c6b9953714632915947ee57d3a53bce3dcddb8e2affd207fe7a70d8bbc394ad4"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.272051 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" event={"ID":"7d0b3ce9-3a56-4562-9534-dc512f82474d","Type":"ContainerStarted","Data":"3e2c93f7d0ab0355d440398462406b5d3376b2ea504710cce3c77298e975b23e"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.273404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" event={"ID":"4df77986-9162-4886-944e-fb40e804f2db","Type":"ContainerStarted","Data":"6cb6558e68eee8ff0d4fc95b4c45e19528a839cc4caeafca66ca7b99da3d607f"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.275306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" event={"ID":"d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09","Type":"ContainerStarted","Data":"bce93d7d744238ab90d64e4866a360632c5dd5f5600fc9f698a5bb36ae2eda13"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.276443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" event={"ID":"b7fe5a99-824d-49bc-aed1-c14fef7eddc8","Type":"ContainerStarted","Data":"9be3b9ebd4ba90467c6f9c46d292d60e1f59e43774565060df10104d675664b8"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.278361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" event={"ID":"23a9551f-4760-4b3d-a00e-b4c2f623c0c8","Type":"ContainerStarted","Data":"6a33f2dc6f1184c6e6206f93784a1ecc2e3cdee40be2657206ee77cc06c7a3b4"} Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.280245 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5m8cj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= 
Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.280301 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" podUID="84b93e6e-f3a8-4b32-beae-85a29e271c68" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.302012 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rn9g2" podStartSLOduration=159.301984728 podStartE2EDuration="2m39.301984728s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:10.301345531 +0000 UTC m=+223.273627584" watchObservedRunningTime="2026-03-14 09:01:10.301984728 +0000 UTC m=+223.274266781" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.318408 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wx229" podStartSLOduration=158.318383751 podStartE2EDuration="2m38.318383751s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:10.317628012 +0000 UTC m=+223.289910095" watchObservedRunningTime="2026-03-14 09:01:10.318383751 +0000 UTC m=+223.290665804" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.337181 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.338498 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.838446877 +0000 UTC m=+223.810728930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.362302 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fgfmm" podStartSLOduration=159.362282359 podStartE2EDuration="2m39.362282359s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:10.344025618 +0000 UTC m=+223.316307671" watchObservedRunningTime="2026-03-14 09:01:10.362282359 +0000 UTC m=+223.334564412" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.366336 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-dzfrm" podStartSLOduration=159.36629007 podStartE2EDuration="2m39.36629007s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:10.359379195 +0000 UTC 
m=+223.331661248" watchObservedRunningTime="2026-03-14 09:01:10.36629007 +0000 UTC m=+223.338572153" Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.439364 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.439843 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:10.939807873 +0000 UTC m=+223.912089926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.540382 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.540893 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.040858682 +0000 UTC m=+224.013140735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.541233 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.541723 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.041713383 +0000 UTC m=+224.013995436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.642530 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.642957 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.142926916 +0000 UTC m=+224.115208969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.744653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.745480 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.245468092 +0000 UTC m=+224.217750145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.846665 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.846799 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.346781697 +0000 UTC m=+224.319063750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.846820 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.847122 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.347114645 +0000 UTC m=+224.319396698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:10 crc kubenswrapper[4869]: I0314 09:01:10.947731 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:10 crc kubenswrapper[4869]: E0314 09:01:10.948394 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.448378549 +0000 UTC m=+224.420660602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.050487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.050953 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.550930816 +0000 UTC m=+224.523212929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.055072 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.152472 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.152647 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.65260623 +0000 UTC m=+224.624888293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.155431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.155931 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.655912334 +0000 UTC m=+224.628194397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.258377 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.258875 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.758855929 +0000 UTC m=+224.731137982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.317074 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zgn62" event={"ID":"33333edb-d3b9-49eb-acc4-bc014c8da396","Type":"ContainerStarted","Data":"cfd3bd0436a655abe42ee1dd47205ba9463708b022564b1b77a10e8250de197b"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.319234 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zgn62" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.334913 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.335006 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.351197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" 
event={"ID":"69ff3e78-eb90-4e0f-a99b-f80cc1c52de9","Type":"ContainerStarted","Data":"5ec2bc894841a8e4dc930de69f2a9dc8f7d4749166676a4b9e44c2e8869175b1"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.356334 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" event={"ID":"7e245ff0-c737-4c36-aaad-f79c24030113","Type":"ContainerStarted","Data":"932577e865feba107d8a6f5f38eb8b43a074fe7e15cc5c0ff3190af7e9f2ce9c"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.360025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.360453 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.860437142 +0000 UTC m=+224.832719195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.367796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" event={"ID":"08f143d2-49b2-4bba-a5fa-a53015a6fa57","Type":"ContainerStarted","Data":"81fb524a7a206c19ff247fdda5e0477288f74d0476c2a259e3c6ded3f6861e7f"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.371863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87" event={"ID":"5ad6520f-5f43-465e-877b-94854b4ba96a","Type":"ContainerStarted","Data":"6a7211381fd378133709430615dbeb1b3b6f135c2ebf54109d99a3c1cc0e6bfd"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.377559 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n4xl6" podStartSLOduration=159.377537053 podStartE2EDuration="2m39.377537053s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.376809484 +0000 UTC m=+224.349091557" watchObservedRunningTime="2026-03-14 09:01:11.377537053 +0000 UTC m=+224.349819106" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.377821 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zgn62" podStartSLOduration=160.377810619 
podStartE2EDuration="2m40.377810619s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.354265075 +0000 UTC m=+224.326547138" watchObservedRunningTime="2026-03-14 09:01:11.377810619 +0000 UTC m=+224.350092682" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.426019 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2pnmj" event={"ID":"7d051a1c-0150-43fd-b2dd-45ba5f654021","Type":"ContainerStarted","Data":"db90952828ed9bc73c4e7c19fdcd8436af449af0b10a3aef5f612bbb5c77d074"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.429594 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" podStartSLOduration=71.429491313 podStartE2EDuration="1m11.429491313s" podCreationTimestamp="2026-03-14 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.428084577 +0000 UTC m=+224.400366640" watchObservedRunningTime="2026-03-14 09:01:11.429491313 +0000 UTC m=+224.401773376" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.438999 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" event={"ID":"0bef4caa-4178-40f3-8486-a824302db6ca","Type":"ContainerStarted","Data":"325a655a48e29e44d71e225d91afdef4a02dc9e560e8d75eef84c88b176c2266"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.459881 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2pnmj" podStartSLOduration=160.459856839 podStartE2EDuration="2m40.459856839s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.458996747 +0000 UTC m=+224.431278810" watchObservedRunningTime="2026-03-14 09:01:11.459856839 +0000 UTC m=+224.432138892" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.461640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.466191 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" event={"ID":"b6e212ea-bda4-4257-b21c-6eadd30f6732","Type":"ContainerStarted","Data":"d36dd8d4ed221d2e1b5e719379e0194bdb2468c1763ea666a059758c1dcb4378"} Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.466359 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:11.966334572 +0000 UTC m=+224.938616615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.481364 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" event={"ID":"4446de34-e54f-4549-babc-9615eecc511a","Type":"ContainerStarted","Data":"ecc2a58bf2c07b804a3e38a53cafbdee2f509a2bd11218658ff2d7f1a2adfc68"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.495829 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-csls9" event={"ID":"e3685946-eb77-4e98-bba5-d642c8697037","Type":"ContainerStarted","Data":"2d7ca984e757eb7076404eb22805b75f14500c1470c10d1e9f131aebe7b89a08"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.502729 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" event={"ID":"7d0b3ce9-3a56-4562-9534-dc512f82474d","Type":"ContainerStarted","Data":"3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.503831 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.507045 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" event={"ID":"7422b29f-2afe-4539-9c59-320e01b530b2","Type":"ContainerStarted","Data":"e6ba1ed08d7d383203e3897b5a39916094586c9d41db94d57313c67e5ce95c44"} Mar 14 09:01:11 crc 
kubenswrapper[4869]: I0314 09:01:11.511469 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5kmqk" podStartSLOduration=159.51143513 podStartE2EDuration="2m39.51143513s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.500220607 +0000 UTC m=+224.472502660" watchObservedRunningTime="2026-03-14 09:01:11.51143513 +0000 UTC m=+224.483717173" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.518409 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fjgpv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.518477 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" podUID="7d0b3ce9-3a56-4562-9534-dc512f82474d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.523427 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" event={"ID":"d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09","Type":"ContainerStarted","Data":"d8f66f47342720bfe8cfc59b06d978edef97fa7a25e840c5a66a2536367b22f9"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.531713 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" 
event={"ID":"4576dffd-6571-46e9-bb64-3add543049a2","Type":"ContainerStarted","Data":"b6f35ae559d4ee184631d64de0d86e0224614c654d08bb9e59b053150207a75d"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.532293 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.534184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-plgzk" event={"ID":"14eab3cd-227a-4e8a-8bf1-f78ee852637c","Type":"ContainerStarted","Data":"45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.539253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" event={"ID":"710f7c79-5b5b-496d-bd68-0b2c6ceebddf","Type":"ContainerStarted","Data":"3de68be33468e215006783c06c09eabd903337165bc9afce710aeefe3731d24b"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.539777 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.540279 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.543307 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" event={"ID":"09d377fd-9022-4280-b48a-10a75f18cb67","Type":"ContainerStarted","Data":"69f2db8abee18e1136102cfdec87d17f911de35c4ddc8e007e256000c8f63057"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.543565 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="Get 
\"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.543614 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.547097 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-csls9" podStartSLOduration=7.547068758 podStartE2EDuration="7.547068758s" podCreationTimestamp="2026-03-14 09:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.546164425 +0000 UTC m=+224.518446498" watchObservedRunningTime="2026-03-14 09:01:11.547068758 +0000 UTC m=+224.519350811" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.549471 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7slw5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.549564 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" podUID="710f7c79-5b5b-496d-bd68-0b2c6ceebddf" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.562773 4869 generic.go:334] "Generic (PLEG): container finished" podID="bfc1e74f-3dd9-4140-855b-e73396e54883" 
containerID="73e5f3d148e32234b80a50f14df3e6bdd3b7174944516382cd373baf9dbde165" exitCode=0 Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.563213 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" event={"ID":"bfc1e74f-3dd9-4140-855b-e73396e54883","Type":"ContainerDied","Data":"73e5f3d148e32234b80a50f14df3e6bdd3b7174944516382cd373baf9dbde165"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.563958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.564355 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.064338063 +0000 UTC m=+225.036620116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.580310 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" event={"ID":"51d0aa6a-7ea1-42c6-b81c-7cedeb75514c","Type":"ContainerStarted","Data":"71100215e968f77113e3a434866779d79d53dbcec6290a1de0515e618f51db7d"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.581352 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.588129 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nbs46 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.588206 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" podUID="51d0aa6a-7ea1-42c6-b81c-7cedeb75514c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.594055 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" 
event={"ID":"f844658f-e0d6-4d40-b67a-29c94cf226b0","Type":"ContainerStarted","Data":"cf4639d6d4668767ad58d7c70a4365970a7699f291ed80ae95d555a0510766e1"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.597419 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kz767" podStartSLOduration=159.597390537 podStartE2EDuration="2m39.597390537s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.572256763 +0000 UTC m=+224.544538816" watchObservedRunningTime="2026-03-14 09:01:11.597390537 +0000 UTC m=+224.569672600" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.605857 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" event={"ID":"e01443e3-18ec-4ad3-821a-14332c44fe30","Type":"ContainerStarted","Data":"d58ab8a8751a147c81cff3211243f3e0d5cef9464acc1cacef4f4b2d2ef7cba4"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.610008 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" event={"ID":"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828","Type":"ContainerStarted","Data":"faf8166173ac91b9ec7fbdbf6c24518f05295144408e2be316b4549f6e949cc1"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.632019 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-plgzk" podStartSLOduration=160.63198852 podStartE2EDuration="2m40.63198852s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.622341407 +0000 UTC m=+224.594623480" watchObservedRunningTime="2026-03-14 09:01:11.63198852 +0000 UTC 
m=+224.604270583" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.642890 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" event={"ID":"306303aa-346b-43a9-9797-f83308ea2b31","Type":"ContainerStarted","Data":"a0bc4d31ee8e689de7f53e8a94aa860853ab0631a74697d1536dbed8e229644c"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.643367 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.648451 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wd9hv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.648534 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" podUID="306303aa-346b-43a9-9797-f83308ea2b31" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.651108 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" podStartSLOduration=159.651089541 podStartE2EDuration="2m39.651089541s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.648075495 +0000 UTC m=+224.620357548" watchObservedRunningTime="2026-03-14 09:01:11.651089541 +0000 UTC m=+224.623371594" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 
09:01:11.657920 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" event={"ID":"b7fe5a99-824d-49bc-aed1-c14fef7eddc8","Type":"ContainerStarted","Data":"f9e61ec7066fb92f44986786a7c67d7827d36eed5ff0a419c1668b052c43fb19"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.659910 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" event={"ID":"bf0df065-c182-44e0-84d1-f0e491baf3f5","Type":"ContainerStarted","Data":"e41bf48a29271805e720522942d0a5d0da639968b0c34748ecccca238f925e74"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.665113 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.666011 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.165974717 +0000 UTC m=+225.138256770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.667094 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" podStartSLOduration=159.667071755 podStartE2EDuration="2m39.667071755s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.666581212 +0000 UTC m=+224.638863275" watchObservedRunningTime="2026-03-14 09:01:11.667071755 +0000 UTC m=+224.639353808" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.667392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.668545 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.168532082 +0000 UTC m=+225.140814135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.670544 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" event={"ID":"6d3f7d57-086d-45b5-8b44-c749f1a13821","Type":"ContainerStarted","Data":"9bbfd92af0bceb71cf99da603f13f2ac57873eeb70ce3e11de8a03402b255d22"} Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.671301 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-729jx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.671340 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" podUID="ccacdee4-4ffc-4ddd-9a09-d80436e38e64" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.671385 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.679788 4869 patch_prober.go:28] interesting pod/console-operator-58897d9998-dzfrm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.679893 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dzfrm" podUID="d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.684403 4869 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c25vk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.684471 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.690415 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" podStartSLOduration=160.690398673 podStartE2EDuration="2m40.690398673s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.688925586 +0000 UTC m=+224.661207639" watchObservedRunningTime="2026-03-14 09:01:11.690398673 +0000 UTC m=+224.662680726" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.736111 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-kvsmq" podStartSLOduration=7.736092985 podStartE2EDuration="7.736092985s" podCreationTimestamp="2026-03-14 09:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.73192872 +0000 UTC m=+224.704210773" watchObservedRunningTime="2026-03-14 09:01:11.736092985 +0000 UTC m=+224.708375038" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.771576 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bgzvz" podStartSLOduration=159.771543289 podStartE2EDuration="2m39.771543289s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.752596972 +0000 UTC m=+224.724879045" watchObservedRunningTime="2026-03-14 09:01:11.771543289 +0000 UTC m=+224.743825362" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.773197 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.776128 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mfx9n" podStartSLOduration=160.776117385 podStartE2EDuration="2m40.776117385s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.77237581 +0000 UTC m=+224.744657883" 
watchObservedRunningTime="2026-03-14 09:01:11.776117385 +0000 UTC m=+224.748399458" Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.777692 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.277653204 +0000 UTC m=+225.249935257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.825974 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bzlc7" podStartSLOduration=159.825954711 podStartE2EDuration="2m39.825954711s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.824770771 +0000 UTC m=+224.797052824" watchObservedRunningTime="2026-03-14 09:01:11.825954711 +0000 UTC m=+224.798236764" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.829567 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" podStartSLOduration=159.829557702 podStartE2EDuration="2m39.829557702s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-14 09:01:11.797619137 +0000 UTC m=+224.769901200" watchObservedRunningTime="2026-03-14 09:01:11.829557702 +0000 UTC m=+224.801839765" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.878082 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.880109 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.380093767 +0000 UTC m=+225.352375820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.880186 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" podStartSLOduration=160.880164909 podStartE2EDuration="2m40.880164909s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.879675037 +0000 UTC m=+224.851957110" watchObservedRunningTime="2026-03-14 
09:01:11.880164909 +0000 UTC m=+224.852446962" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.922335 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" podStartSLOduration=159.922299211 podStartE2EDuration="2m39.922299211s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:11.921754258 +0000 UTC m=+224.894036321" watchObservedRunningTime="2026-03-14 09:01:11.922299211 +0000 UTC m=+224.894581264" Mar 14 09:01:11 crc kubenswrapper[4869]: I0314 09:01:11.981869 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:11 crc kubenswrapper[4869]: E0314 09:01:11.982288 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.482264864 +0000 UTC m=+225.454546917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.083460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.084266 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.584235045 +0000 UTC m=+225.556517098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.185767 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.185992 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.685977351 +0000 UTC m=+225.658259404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.288111 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.288582 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.788564799 +0000 UTC m=+225.760846852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.388877 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.389242 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.889211737 +0000 UTC m=+225.861493800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.490710 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm"
Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.491251 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:12.991232799 +0000 UTC m=+225.963514852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.552821 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.552881 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.591463 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.591493 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.091472657 +0000 UTC m=+226.063754710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.591799 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm"
Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.592117 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.092108313 +0000 UTC m=+226.064390366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.678994 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d4793a0eddf807b768b8a191314721ca3c0dd662283cf38b409f3408969fb4a5"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.679887 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.682638 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87" event={"ID":"5ad6520f-5f43-465e-877b-94854b4ba96a","Type":"ContainerStarted","Data":"dc731657d54633ebdd02b5ebe00a6df7d2913414ee7d63f804dd63aeebc1420b"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.687034 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" event={"ID":"b7fe5a99-824d-49bc-aed1-c14fef7eddc8","Type":"ContainerStarted","Data":"3c050388723bcb4a25bbf4dfd54f3321c9a469bda10c2582a3bf02131b171757"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.691029 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" event={"ID":"7422b29f-2afe-4539-9c59-320e01b530b2","Type":"ContainerStarted","Data":"e23c71644314230ba9dcbd522c04d5cfd6c2cf355bd46e341aa222479377c90c"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.695525 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.695754 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.195718596 +0000 UTC m=+226.168000649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.696324 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm"
Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.696787 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.196779063 +0000 UTC m=+226.169061116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.700297 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" event={"ID":"4aa47dbd-0cbc-4009-9e42-22f4e4eb7828","Type":"ContainerStarted","Data":"45f3f26391ee51046c449cf0f2523146308518974bdf7cfca03b714480a14384"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.734096 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" event={"ID":"bfc1e74f-3dd9-4140-855b-e73396e54883","Type":"ContainerStarted","Data":"a86d1b432e62847a1d035a3e9982a68a2c8124aeaba71d532ec706661ea2f0f8"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.734461 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pp9bj" podStartSLOduration=160.734435103 podStartE2EDuration="2m40.734435103s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.72558843 +0000 UTC m=+225.697870493" watchObservedRunningTime="2026-03-14 09:01:12.734435103 +0000 UTC m=+225.706717156"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.737224 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" event={"ID":"b6e212ea-bda4-4257-b21c-6eadd30f6732","Type":"ContainerStarted","Data":"805eb2c37d678fc9596e30f28fdee9bb442077bf484fa54c2176e89123fc5e3f"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.746070 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6e86005d68db48775fd611ec154f8f250acc29efa55522687caddd83069a7360"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.756931 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wknzm" podStartSLOduration=161.756908929 podStartE2EDuration="2m41.756908929s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.754092988 +0000 UTC m=+225.726375041" watchObservedRunningTime="2026-03-14 09:01:12.756908929 +0000 UTC m=+225.729190982"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.767885 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jwbdc" event={"ID":"ec0ff212-a526-4ee2-8310-83def5210470","Type":"ContainerStarted","Data":"46540cb574dda56e8658b20d35e9918aeaa9bfc20ce5543871fd1835461646ab"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.767955 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jwbdc" event={"ID":"ec0ff212-a526-4ee2-8310-83def5210470","Type":"ContainerStarted","Data":"26f73f88928e6a7cd48c4717af648e7239a91393f74a931204f12cbd597e62fb"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.768919 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jwbdc"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.780129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" event={"ID":"0bef4caa-4178-40f3-8486-a824302db6ca","Type":"ContainerStarted","Data":"17a926f203102251b4d17e1181c998533122236c7c92579052bd3042a0c6ed63"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.783090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" event={"ID":"09d377fd-9022-4280-b48a-10a75f18cb67","Type":"ContainerStarted","Data":"b86f0028f9e3faafa21194f85125e69c7c1f279a772033a1684e4898bac89651"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.783622 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.784459 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2t9jx" podStartSLOduration=161.784427634 podStartE2EDuration="2m41.784427634s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.778654018 +0000 UTC m=+225.750936071" watchObservedRunningTime="2026-03-14 09:01:12.784427634 +0000 UTC m=+225.756709677"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.788157 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" event={"ID":"8518e88a-aacc-484f-b82d-d55106c5bdcf","Type":"ContainerStarted","Data":"10ffe9a41881f829ea0c4a396696412fb6fe40cb424925488413a9e4743094c4"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.788207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" event={"ID":"8518e88a-aacc-484f-b82d-d55106c5bdcf","Type":"ContainerStarted","Data":"ee3a39799744ed71517712f1fdda5338de100419706873c28108f08af4fc5621"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.798477 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.798628 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.298599911 +0000 UTC m=+226.270881964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.799425 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm"
Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.803389 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.303359481 +0000 UTC m=+226.275641534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.808792 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" event={"ID":"5cfa3df2-d8c2-4ce8-88ef-31963b5e027f","Type":"ContainerStarted","Data":"98cd0b1387d1e01e052c7403f18022a734fd4e16c45f9462ea3a84b8763e0138"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.830261 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g2q87" podStartSLOduration=160.830238328 podStartE2EDuration="2m40.830238328s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.803311439 +0000 UTC m=+225.775593502" watchObservedRunningTime="2026-03-14 09:01:12.830238328 +0000 UTC m=+225.802520381"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.844143 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" event={"ID":"08f143d2-49b2-4bba-a5fa-a53015a6fa57","Type":"ContainerStarted","Data":"9c6acb94cf22d1f1ec5ebcce58ad9b07b04bc60d9b91da86a797fe8bb623117c"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.853037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6f835615011f83cc6ce6d235bf0985c05fa84a37d1228dd5757b3f0f5d713c76"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.868325 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" event={"ID":"12c1cd50-7623-4fd4-aea2-012d1ff4a3a4","Type":"ContainerStarted","Data":"b23e2b1a5500a1335b999e39363edd54b4e7631d8ecad1d024c05f2e92ded4be"}
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.869317 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nbs46 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.869386 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" podUID="51d0aa6a-7ea1-42c6-b81c-7cedeb75514c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.869830 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fjgpv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.869905 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7slw5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.869904 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" podUID="7d0b3ce9-3a56-4562-9534-dc512f82474d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.869927 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" podUID="710f7c79-5b5b-496d-bd68-0b2c6ceebddf" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.870276 4869 patch_prober.go:28] interesting pod/console-operator-58897d9998-dzfrm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.870298 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dzfrm" podUID="d72c03d3-b35f-4e62-a2ff-6c5a9743c9cd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.870356 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.870376 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.870431 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wd9hv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.870456 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" podUID="306303aa-346b-43a9-9797-f83308ea2b31" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.871223 4869 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c25vk container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body=
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.871265 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.872893 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kv4dw" podStartSLOduration=160.872865114 podStartE2EDuration="2m40.872865114s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.830605308 +0000 UTC m=+225.802887381" watchObservedRunningTime="2026-03-14 09:01:12.872865114 +0000 UTC m=+225.845147177"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.873392 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" podStartSLOduration=160.873385127 podStartE2EDuration="2m40.873385127s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.867011857 +0000 UTC m=+225.839293910" watchObservedRunningTime="2026-03-14 09:01:12.873385127 +0000 UTC m=+225.845667190"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.895194 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g5l86" podStartSLOduration=160.895176927 podStartE2EDuration="2m40.895176927s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.891621807 +0000 UTC m=+225.863904070" watchObservedRunningTime="2026-03-14 09:01:12.895176927 +0000 UTC m=+225.867458980"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.901293 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.901571 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.401484545 +0000 UTC m=+226.373766598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.904172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm"
Mar 14 09:01:12 crc kubenswrapper[4869]: E0314 09:01:12.905683 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.405668091 +0000 UTC m=+226.377950144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.926553 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" podStartSLOduration=160.926500476 podStartE2EDuration="2m40.926500476s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.925788969 +0000 UTC m=+225.898071022" watchObservedRunningTime="2026-03-14 09:01:12.926500476 +0000 UTC m=+225.898782529"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.964471 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" podStartSLOduration=161.964441853 podStartE2EDuration="2m41.964441853s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.963973402 +0000 UTC m=+225.936255455" watchObservedRunningTime="2026-03-14 09:01:12.964441853 +0000 UTC m=+225.936723906"
Mar 14 09:01:12 crc kubenswrapper[4869]: I0314 09:01:12.984393 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jwbdc" podStartSLOduration=8.984374907 podStartE2EDuration="8.984374907s" podCreationTimestamp="2026-03-14 09:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:12.983246838 +0000 UTC m=+225.955528901" watchObservedRunningTime="2026-03-14 09:01:12.984374907 +0000 UTC m=+225.956656960"
Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.006253 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.006599 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.506569795 +0000 UTC m=+226.478851868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.006926 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm"
Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.007256 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.507248293 +0000 UTC m=+226.479530416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.045695 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-brsgj" podStartSLOduration=162.045675222 podStartE2EDuration="2m42.045675222s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:13.045159489 +0000 UTC m=+226.017441562" watchObservedRunningTime="2026-03-14 09:01:13.045675222 +0000 UTC m=+226.017957285"
Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.109905 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.110058 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.610035755 +0000 UTC m=+226.582317818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.110097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm"
Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.110462 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.610451026 +0000 UTC m=+226.582733089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.142840 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qhd6d" podStartSLOduration=162.142816802 podStartE2EDuration="2m42.142816802s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:13.079896695 +0000 UTC m=+226.052178758" watchObservedRunningTime="2026-03-14 09:01:13.142816802 +0000 UTC m=+226.115098875"
Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.176590 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7q5vd" podStartSLOduration=161.176569853 podStartE2EDuration="2m41.176569853s" podCreationTimestamp="2026-03-14 08:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:13.173046514 +0000 UTC m=+226.145328587" watchObservedRunningTime="2026-03-14 09:01:13.176569853 +0000 UTC m=+226.148851916"
Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.211117 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.211499 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.711470064 +0000 UTC m=+226.683752117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.313952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm"
Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.314330 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.814318187 +0000 UTC m=+226.786600240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.415496 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.416023 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:13.915983091 +0000 UTC m=+226.888265144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.516693 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.517383 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.017371768 +0000 UTC m=+226.989653821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.549699 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:13 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:13 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:13 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.549784 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.618359 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.618551 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 09:01:14.118485498 +0000 UTC m=+227.090767551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.619030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.619544 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.119503044 +0000 UTC m=+227.091803037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.731335 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.734191 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.233972811 +0000 UTC m=+227.206254864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.835448 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.835833 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.33581717 +0000 UTC m=+227.308099233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.896702 4869 generic.go:334] "Generic (PLEG): container finished" podID="7e245ff0-c737-4c36-aaad-f79c24030113" containerID="932577e865feba107d8a6f5f38eb8b43a074fe7e15cc5c0ff3190af7e9f2ce9c" exitCode=0 Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.897008 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" event={"ID":"7e245ff0-c737-4c36-aaad-f79c24030113","Type":"ContainerDied","Data":"932577e865feba107d8a6f5f38eb8b43a074fe7e15cc5c0ff3190af7e9f2ce9c"} Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.898364 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wd9hv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.898416 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" podUID="306303aa-346b-43a9-9797-f83308ea2b31" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.899072 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fjgpv container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.899127 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" podUID="7d0b3ce9-3a56-4562-9534-dc512f82474d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.936647 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.936969 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.436914769 +0000 UTC m=+227.409196822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:13 crc kubenswrapper[4869]: I0314 09:01:13.937903 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:13 crc kubenswrapper[4869]: E0314 09:01:13.949995 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.449977418 +0000 UTC m=+227.422259471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.039130 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.039376 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.539296521 +0000 UTC m=+227.511578564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.039590 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.040041 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.54002889 +0000 UTC m=+227.512310943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.061545 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51240: no serving certificate available for the kubelet" Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.140903 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.141154 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.641121759 +0000 UTC m=+227.613403822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.141315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.141747 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.641732884 +0000 UTC m=+227.614014937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.155390 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51242: no serving certificate available for the kubelet" Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.243110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.243656 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.743634694 +0000 UTC m=+227.715916747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.269112 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51254: no serving certificate available for the kubelet" Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.344781 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.345348 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.8453323 +0000 UTC m=+227.817614343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.363210 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51260: no serving certificate available for the kubelet" Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.446182 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.446657 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:14.946604483 +0000 UTC m=+227.918886536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.471774 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51266: no serving certificate available for the kubelet" Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.545473 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:14 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:14 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:14 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.545969 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.548456 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.548920 4869 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.048905983 +0000 UTC m=+228.021188036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.649906 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.650190 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.150147336 +0000 UTC m=+228.122429389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.751390 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.752068 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.252038596 +0000 UTC m=+228.224320649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.777572 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51272: no serving certificate available for the kubelet" Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.852786 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.853379 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.353356661 +0000 UTC m=+228.325638714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.862976 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-729jx"] Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.863706 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" podUID="ccacdee4-4ffc-4ddd-9a09-d80436e38e64" containerName="controller-manager" containerID="cri-o://afbd9fb2e4d0ed11c8114a21bad751d2aed10068efb864b501af6b1e05a9cfd2" gracePeriod=30 Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.867490 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.905884 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwb49" event={"ID":"2cd7688a-9024-48c6-9094-3df0aaa49aa7","Type":"ContainerStarted","Data":"f59fec6f8408805decddff6eaeebe69ba07727ddaa366f6cde6bdd1a6add781b"} Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.954838 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.957056 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj"] Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.957319 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" podUID="84b93e6e-f3a8-4b32-beae-85a29e271c68" containerName="route-controller-manager" containerID="cri-o://dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255" gracePeriod=30 Mar 14 09:01:14 crc kubenswrapper[4869]: E0314 09:01:14.957097 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.457076288 +0000 UTC m=+228.429358341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:14 crc kubenswrapper[4869]: I0314 09:01:14.968563 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.056733 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.057014 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.556997597 +0000 UTC m=+228.529279650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.087272 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51286: no serving certificate available for the kubelet" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.158363 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.158854 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.658838156 +0000 UTC m=+228.631120209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.259802 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.260044 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.759994157 +0000 UTC m=+228.732276220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.260901 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.261289 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.761272889 +0000 UTC m=+228.733554942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.363331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.363614 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.86359918 +0000 UTC m=+228.835881233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.363662 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.363928 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.863921958 +0000 UTC m=+228.836204011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.464873 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.465206 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:15.965185031 +0000 UTC m=+228.937467084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.512367 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51290: no serving certificate available for the kubelet" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.543661 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:15 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:15 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:15 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.543720 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.566347 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.566737 4869 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.066725492 +0000 UTC m=+229.039007545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.651099 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.667083 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.667396 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.167380901 +0000 UTC m=+229.139662954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.768260 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e245ff0-c737-4c36-aaad-f79c24030113-config-volume\") pod \"7e245ff0-c737-4c36-aaad-f79c24030113\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.768311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e245ff0-c737-4c36-aaad-f79c24030113-secret-volume\") pod \"7e245ff0-c737-4c36-aaad-f79c24030113\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.768459 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsdbx\" (UniqueName: \"kubernetes.io/projected/7e245ff0-c737-4c36-aaad-f79c24030113-kube-api-access-vsdbx\") pod \"7e245ff0-c737-4c36-aaad-f79c24030113\" (UID: \"7e245ff0-c737-4c36-aaad-f79c24030113\") " Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.768705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:15 crc 
kubenswrapper[4869]: E0314 09:01:15.769013 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.269002063 +0000 UTC m=+229.241284116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.769789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e245ff0-c737-4c36-aaad-f79c24030113-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e245ff0-c737-4c36-aaad-f79c24030113" (UID: "7e245ff0-c737-4c36-aaad-f79c24030113"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.781287 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e245ff0-c737-4c36-aaad-f79c24030113-kube-api-access-vsdbx" (OuterVolumeSpecName: "kube-api-access-vsdbx") pod "7e245ff0-c737-4c36-aaad-f79c24030113" (UID: "7e245ff0-c737-4c36-aaad-f79c24030113"). InnerVolumeSpecName "kube-api-access-vsdbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.787830 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e245ff0-c737-4c36-aaad-f79c24030113-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e245ff0-c737-4c36-aaad-f79c24030113" (UID: "7e245ff0-c737-4c36-aaad-f79c24030113"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.870782 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.871028 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e245ff0-c737-4c36-aaad-f79c24030113-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.871039 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e245ff0-c737-4c36-aaad-f79c24030113-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.871047 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsdbx\" (UniqueName: \"kubernetes.io/projected/7e245ff0-c737-4c36-aaad-f79c24030113-kube-api-access-vsdbx\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 09:01:15.871110 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 09:01:16.371095358 +0000 UTC m=+229.343377401 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.930910 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.943783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" event={"ID":"7e245ff0-c737-4c36-aaad-f79c24030113","Type":"ContainerDied","Data":"e927466848bc479c2874547580d26e764deac10fdf3f087ea3d62107502a8163"} Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.943835 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e927466848bc479c2874547580d26e764deac10fdf3f087ea3d62107502a8163" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.943910 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.975364 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngg9x\" (UniqueName: \"kubernetes.io/projected/84b93e6e-f3a8-4b32-beae-85a29e271c68-kube-api-access-ngg9x\") pod \"84b93e6e-f3a8-4b32-beae-85a29e271c68\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.975459 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b93e6e-f3a8-4b32-beae-85a29e271c68-serving-cert\") pod \"84b93e6e-f3a8-4b32-beae-85a29e271c68\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.975568 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-config\") pod \"84b93e6e-f3a8-4b32-beae-85a29e271c68\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.975622 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-client-ca\") pod \"84b93e6e-f3a8-4b32-beae-85a29e271c68\" (UID: \"84b93e6e-f3a8-4b32-beae-85a29e271c68\") " Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.976156 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:15 crc kubenswrapper[4869]: E0314 
09:01:15.976686 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.476663291 +0000 UTC m=+229.448945344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.978102 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-client-ca" (OuterVolumeSpecName: "client-ca") pod "84b93e6e-f3a8-4b32-beae-85a29e271c68" (UID: "84b93e6e-f3a8-4b32-beae-85a29e271c68"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:15 crc kubenswrapper[4869]: I0314 09:01:15.980789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-config" (OuterVolumeSpecName: "config") pod "84b93e6e-f3a8-4b32-beae-85a29e271c68" (UID: "84b93e6e-f3a8-4b32-beae-85a29e271c68"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.015152 4869 generic.go:334] "Generic (PLEG): container finished" podID="ccacdee4-4ffc-4ddd-9a09-d80436e38e64" containerID="afbd9fb2e4d0ed11c8114a21bad751d2aed10068efb864b501af6b1e05a9cfd2" exitCode=0 Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.015305 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" event={"ID":"ccacdee4-4ffc-4ddd-9a09-d80436e38e64","Type":"ContainerDied","Data":"afbd9fb2e4d0ed11c8114a21bad751d2aed10068efb864b501af6b1e05a9cfd2"} Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.026777 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b93e6e-f3a8-4b32-beae-85a29e271c68-kube-api-access-ngg9x" (OuterVolumeSpecName: "kube-api-access-ngg9x") pod "84b93e6e-f3a8-4b32-beae-85a29e271c68" (UID: "84b93e6e-f3a8-4b32-beae-85a29e271c68"). InnerVolumeSpecName "kube-api-access-ngg9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.030885 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b93e6e-f3a8-4b32-beae-85a29e271c68-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84b93e6e-f3a8-4b32-beae-85a29e271c68" (UID: "84b93e6e-f3a8-4b32-beae-85a29e271c68"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.047488 4869 generic.go:334] "Generic (PLEG): container finished" podID="84b93e6e-f3a8-4b32-beae-85a29e271c68" containerID="dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255" exitCode=0 Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.047565 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" event={"ID":"84b93e6e-f3a8-4b32-beae-85a29e271c68","Type":"ContainerDied","Data":"dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255"} Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.047611 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" event={"ID":"84b93e6e-f3a8-4b32-beae-85a29e271c68","Type":"ContainerDied","Data":"83d4b1b2a59c8771ca4d8d78ea706b9ef625a99b67fe31a10429d930f981fbc5"} Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.047629 4869 scope.go:117] "RemoveContainer" containerID="dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.047619 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.054495 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.077316 4869 scope.go:117] "RemoveContainer" containerID="dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.077815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-serving-cert\") pod \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.078143 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-config\") pod \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.078249 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84wp6\" (UniqueName: \"kubernetes.io/projected/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-kube-api-access-84wp6\") pod \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.078400 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.078529 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-proxy-ca-bundles\") pod 
\"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.078574 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-client-ca\") pod \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\" (UID: \"ccacdee4-4ffc-4ddd-9a09-d80436e38e64\") " Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.079080 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.079101 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngg9x\" (UniqueName: \"kubernetes.io/projected/84b93e6e-f3a8-4b32-beae-85a29e271c68-kube-api-access-ngg9x\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.079111 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84b93e6e-f3a8-4b32-beae-85a29e271c68-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.079121 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84b93e6e-f3a8-4b32-beae-85a29e271c68-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.080175 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-client-ca" (OuterVolumeSpecName: "client-ca") pod "ccacdee4-4ffc-4ddd-9a09-d80436e38e64" (UID: "ccacdee4-4ffc-4ddd-9a09-d80436e38e64"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.080261 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.580239423 +0000 UTC m=+229.552521476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.081220 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-config" (OuterVolumeSpecName: "config") pod "ccacdee4-4ffc-4ddd-9a09-d80436e38e64" (UID: "ccacdee4-4ffc-4ddd-9a09-d80436e38e64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.082142 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ccacdee4-4ffc-4ddd-9a09-d80436e38e64" (UID: "ccacdee4-4ffc-4ddd-9a09-d80436e38e64"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.102819 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255\": container with ID starting with dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255 not found: ID does not exist" containerID="dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.102935 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255"} err="failed to get container status \"dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255\": rpc error: code = NotFound desc = could not find container \"dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255\": container with ID starting with dd519f357c24f3c9ba7be6e2f8ed1426ca93bd6d64700b96e621f919bf7c5255 not found: ID does not exist" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.103083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-kube-api-access-84wp6" (OuterVolumeSpecName: "kube-api-access-84wp6") pod "ccacdee4-4ffc-4ddd-9a09-d80436e38e64" (UID: "ccacdee4-4ffc-4ddd-9a09-d80436e38e64"). InnerVolumeSpecName "kube-api-access-84wp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.113593 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ccacdee4-4ffc-4ddd-9a09-d80436e38e64" (UID: "ccacdee4-4ffc-4ddd-9a09-d80436e38e64"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.137339 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj"] Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.137840 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5m8cj"] Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.141054 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5k48v" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.186289 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.186458 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.186472 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.186483 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84wp6\" (UniqueName: \"kubernetes.io/projected/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-kube-api-access-84wp6\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.186493 4869 
reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.186518 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccacdee4-4ffc-4ddd-9a09-d80436e38e64-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.186960 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.686936714 +0000 UTC m=+229.659218767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.229249 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51300: no serving certificate available for the kubelet" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.287765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.288050 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.787978502 +0000 UTC m=+229.760260565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.288333 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.289015 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.789004428 +0000 UTC m=+229.761286661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.389975 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.390194 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.890151539 +0000 UTC m=+229.862433592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.390238 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.390856 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.890846557 +0000 UTC m=+229.863128620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.491729 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.491930 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.991898845 +0000 UTC m=+229.964180898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.492165 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.492499 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:16.99249161 +0000 UTC m=+229.964773663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.545478 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:16 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:16 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:16 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.545583 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.594193 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.594435 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 09:01:17.094368239 +0000 UTC m=+230.066650292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.594925 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.595540 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.095500248 +0000 UTC m=+230.067782301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.696651 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.696835 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.196807623 +0000 UTC m=+230.169089676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.697003 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.697360 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.197351686 +0000 UTC m=+230.169633739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.714400 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94926"] Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.714740 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b93e6e-f3a8-4b32-beae-85a29e271c68" containerName="route-controller-manager" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.714759 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b93e6e-f3a8-4b32-beae-85a29e271c68" containerName="route-controller-manager" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.714774 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e245ff0-c737-4c36-aaad-f79c24030113" containerName="collect-profiles" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.714781 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e245ff0-c737-4c36-aaad-f79c24030113" containerName="collect-profiles" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.714791 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccacdee4-4ffc-4ddd-9a09-d80436e38e64" containerName="controller-manager" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.714798 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccacdee4-4ffc-4ddd-9a09-d80436e38e64" containerName="controller-manager" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.714915 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7e245ff0-c737-4c36-aaad-f79c24030113" containerName="collect-profiles" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.714932 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccacdee4-4ffc-4ddd-9a09-d80436e38e64" containerName="controller-manager" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.714946 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b93e6e-f3a8-4b32-beae-85a29e271c68" containerName="route-controller-manager" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.715828 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.718077 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.736658 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94926"] Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.798650 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.798821 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.298798805 +0000 UTC m=+230.271080868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.799036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.799075 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-catalog-content\") pod \"community-operators-94926\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") " pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.799112 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-utilities\") pod \"community-operators-94926\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") " pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.799160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj949\" (UniqueName: 
\"kubernetes.io/projected/8466d496-2ca4-49f2-96ff-75386b047783-kube-api-access-jj949\") pod \"community-operators-94926\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") " pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.799649 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.299638135 +0000 UTC m=+230.271920238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.805418 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.806222 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.811530 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.815849 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.819223 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.900575 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.900721 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.400702775 +0000 UTC m=+230.372984818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.900989 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj949\" (UniqueName: \"kubernetes.io/projected/8466d496-2ca4-49f2-96ff-75386b047783-kube-api-access-jj949\") pod \"community-operators-94926\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") " pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.901086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb7bc69c-b53b-4faa-8e75-a409f36af034-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eb7bc69c-b53b-4faa-8e75-a409f36af034\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.901134 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb7bc69c-b53b-4faa-8e75-a409f36af034-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eb7bc69c-b53b-4faa-8e75-a409f36af034\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.901161 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.901208 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-catalog-content\") pod \"community-operators-94926\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") " pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.901235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-utilities\") pod \"community-operators-94926\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") " pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.901869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-utilities\") pod \"community-operators-94926\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") " pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:16 crc kubenswrapper[4869]: E0314 09:01:16.902248 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.402238174 +0000 UTC m=+230.374520417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.902330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-catalog-content\") pod \"community-operators-94926\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") " pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.903372 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6cz2t"] Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.904595 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.906675 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.920707 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6cz2t"] Mar 14 09:01:16 crc kubenswrapper[4869]: I0314 09:01:16.923361 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj949\" (UniqueName: \"kubernetes.io/projected/8466d496-2ca4-49f2-96ff-75386b047783-kube-api-access-jj949\") pod \"community-operators-94926\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") " pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.002409 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.002863 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.502827511 +0000 UTC m=+230.475109574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.003578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-utilities\") pod \"certified-operators-6cz2t\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") " pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.003664 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2t6\" (UniqueName: \"kubernetes.io/projected/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-kube-api-access-kc2t6\") pod \"certified-operators-6cz2t\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") " pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.003722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-catalog-content\") pod \"certified-operators-6cz2t\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") " pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.003761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb7bc69c-b53b-4faa-8e75-a409f36af034-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"eb7bc69c-b53b-4faa-8e75-a409f36af034\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.003792 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb7bc69c-b53b-4faa-8e75-a409f36af034-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eb7bc69c-b53b-4faa-8e75-a409f36af034\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.003823 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.004261 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.504245696 +0000 UTC m=+230.476527739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.004454 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb7bc69c-b53b-4faa-8e75-a409f36af034-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eb7bc69c-b53b-4faa-8e75-a409f36af034\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.022292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb7bc69c-b53b-4faa-8e75-a409f36af034-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eb7bc69c-b53b-4faa-8e75-a409f36af034\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.034607 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.034664 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.045254 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94926" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.058556 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.060361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-729jx" event={"ID":"ccacdee4-4ffc-4ddd-9a09-d80436e38e64","Type":"ContainerDied","Data":"578740a07eb12321cbd001114c52ce7737433581694e5f0aee68e2f356e6b6a2"} Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.060453 4869 scope.go:117] "RemoveContainer" containerID="afbd9fb2e4d0ed11c8114a21bad751d2aed10068efb864b501af6b1e05a9cfd2" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.069148 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwb49" event={"ID":"2cd7688a-9024-48c6-9094-3df0aaa49aa7","Type":"ContainerStarted","Data":"2c93c7f841e6b915f312a9e7786b9edae15f9b1f3e64398e0b65943d3695265f"} Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.116269 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9xwqm"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.116953 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.117406 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2t6\" (UniqueName: \"kubernetes.io/projected/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-kube-api-access-kc2t6\") pod \"certified-operators-6cz2t\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") " pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.117462 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-catalog-content\") pod \"certified-operators-6cz2t\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") " pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.118058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-utilities\") pod \"certified-operators-6cz2t\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") " pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.118145 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.618120288 +0000 UTC m=+230.590402341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.119037 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-catalog-content\") pod \"certified-operators-6cz2t\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") " pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.120025 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.120264 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-729jx"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.121091 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-utilities\") pod \"certified-operators-6cz2t\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") " pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.124901 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.125837 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-729jx"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.129024 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xwqm"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.155454 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc2t6\" (UniqueName: \"kubernetes.io/projected/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-kube-api-access-kc2t6\") pod \"certified-operators-6cz2t\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") " pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.219806 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cjb9\" (UniqueName: \"kubernetes.io/projected/0bb7315d-59e6-4f41-a983-700a083a75af-kube-api-access-2cjb9\") pod \"community-operators-9xwqm\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.219873 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.219960 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-utilities\") pod 
\"community-operators-9xwqm\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.219992 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-catalog-content\") pod \"community-operators-9xwqm\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.220368 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.720355986 +0000 UTC m=+230.692638039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.253094 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.295683 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5756d879cc-lthf9"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.308031 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.313090 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.313429 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.317794 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.317988 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.318101 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.318840 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.322859 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.323335 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-utilities\") pod \"community-operators-9xwqm\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " pod="openshift-marketplace/community-operators-9xwqm" Mar 14 
09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.323397 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-catalog-content\") pod \"community-operators-9xwqm\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.323433 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjb9\" (UniqueName: \"kubernetes.io/projected/0bb7315d-59e6-4f41-a983-700a083a75af-kube-api-access-2cjb9\") pod \"community-operators-9xwqm\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.323996 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.823956859 +0000 UTC m=+230.796238912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.324859 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-utilities\") pod \"community-operators-9xwqm\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.325332 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-catalog-content\") pod \"community-operators-9xwqm\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.333894 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.339379 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.342868 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.347889 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.348360 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.348566 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.348769 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.349612 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.349903 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.354985 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5756d879cc-lthf9"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.361271 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.373857 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cjb9\" (UniqueName: \"kubernetes.io/projected/0bb7315d-59e6-4f41-a983-700a083a75af-kube-api-access-2cjb9\") pod \"community-operators-9xwqm\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " 
pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.381930 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-klzzb"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.383347 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klzzb"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.383466 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.415726 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.416391 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.419616 4869 patch_prober.go:28] interesting pod/console-f9d7485db-plgzk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.419689 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-plgzk" podUID="14eab3cd-227a-4e8a-8bf1-f78ee852637c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.424835 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-proxy-ca-bundles\") pod \"controller-manager-5756d879cc-lthf9\" (UID: 
\"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.424868 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-client-ca\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.424930 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.424955 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-serving-cert\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.425013 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-config\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.425040 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-catalog-content\") pod \"certified-operators-klzzb\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.425092 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngmjr\" (UniqueName: \"kubernetes.io/projected/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-kube-api-access-ngmjr\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.425112 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-utilities\") pod \"certified-operators-klzzb\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.425132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-client-ca\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.425175 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55ed271a-fef0-4210-9714-2bee1a22aef4-serving-cert\") pod \"controller-manager-5756d879cc-lthf9\" (UID: 
\"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.425213 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4b49\" (UniqueName: \"kubernetes.io/projected/55ed271a-fef0-4210-9714-2bee1a22aef4-kube-api-access-m4b49\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.425267 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsq2f\" (UniqueName: \"kubernetes.io/projected/a1ae3c37-af29-4957-9648-52c28558591e-kube-api-access-vsq2f\") pod \"certified-operators-klzzb\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.425290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-config\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.426369 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:17.926351201 +0000 UTC m=+230.898633244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.441030 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94926"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.485325 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.485404 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.485533 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.485563 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: 
connect: connection refused" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.496977 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.500059 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-dzfrm" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.521711 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.521795 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.526548 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55ed271a-fef0-4210-9714-2bee1a22aef4-serving-cert\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527070 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsq2f\" (UniqueName: \"kubernetes.io/projected/a1ae3c37-af29-4957-9648-52c28558591e-kube-api-access-vsq2f\") pod \"certified-operators-klzzb\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " 
pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527099 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4b49\" (UniqueName: \"kubernetes.io/projected/55ed271a-fef0-4210-9714-2bee1a22aef4-kube-api-access-m4b49\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527127 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-config\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527152 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-proxy-ca-bundles\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527204 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-client-ca\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527229 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-serving-cert\") 
pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-config\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-catalog-content\") pod \"certified-operators-klzzb\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngmjr\" (UniqueName: \"kubernetes.io/projected/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-kube-api-access-ngmjr\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527449 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-utilities\") pod \"certified-operators-klzzb\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.527482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-client-ca\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.528778 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-client-ca\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.530767 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-config\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.531762 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-catalog-content\") pod \"certified-operators-klzzb\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.530776 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:18.030725124 +0000 UTC m=+231.003007177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.532720 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-utilities\") pod \"certified-operators-klzzb\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.533624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-config\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.533785 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-client-ca\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.536354 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-serving-cert\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.542035 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-proxy-ca-bundles\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.542493 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55ed271a-fef0-4210-9714-2bee1a22aef4-serving-cert\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.549320 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.549396 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.551698 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.558853 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:17 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:17 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:17 crc kubenswrapper[4869]: healthz check failed Mar 14 
09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.558935 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.560860 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngmjr\" (UniqueName: \"kubernetes.io/projected/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-kube-api-access-ngmjr\") pod \"route-controller-manager-6cb7587b4-94w4b\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.560963 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4b49\" (UniqueName: \"kubernetes.io/projected/55ed271a-fef0-4210-9714-2bee1a22aef4-kube-api-access-m4b49\") pod \"controller-manager-5756d879cc-lthf9\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.564307 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.566575 4869 ???:1] "http: TLS handshake error from 192.168.126.11:51314: no serving certificate available for the kubelet" Mar 14 09:01:17 crc kubenswrapper[4869]: W0314 09:01:17.573535 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeb7bc69c_b53b_4faa_8e75_a409f36af034.slice/crio-b5daf892ac6be1d395195b66f1f3468ba649a2a1e8590ef8119620a2943a0e45 WatchSource:0}: Error finding container b5daf892ac6be1d395195b66f1f3468ba649a2a1e8590ef8119620a2943a0e45: Status 404 returned error can't find the container with id 
b5daf892ac6be1d395195b66f1f3468ba649a2a1e8590ef8119620a2943a0e45 Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.576960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsq2f\" (UniqueName: \"kubernetes.io/projected/a1ae3c37-af29-4957-9648-52c28558591e-kube-api-access-vsq2f\") pod \"certified-operators-klzzb\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.629620 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.630175 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:18.130154092 +0000 UTC m=+231.102436145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.691095 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nbs46" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.693492 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.700318 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.722658 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.732355 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.734039 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-14 09:01:18.234016001 +0000 UTC m=+231.206298054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.747578 4869 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.765911 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b93e6e-f3a8-4b32-beae-85a29e271c68" path="/var/lib/kubelet/pods/84b93e6e-f3a8-4b32-beae-85a29e271c68/volumes" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.766934 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccacdee4-4ffc-4ddd-9a09-d80436e38e64" path="/var/lib/kubelet/pods/ccacdee4-4ffc-4ddd-9a09-d80436e38e64/volumes" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.769203 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7slw5" Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.833879 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:17 crc 
kubenswrapper[4869]: E0314 09:01:17.839106 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-14 09:01:18.33907243 +0000 UTC m=+231.311354643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fdrdm" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.922095 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6cz2t"] Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.937170 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:17 crc kubenswrapper[4869]: E0314 09:01:17.937690 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-14 09:01:18.437666936 +0000 UTC m=+231.409948989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.954099 4869 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-14T09:01:17.747601734Z","Handler":null,"Name":""} Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.965530 4869 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.965586 4869 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 14 09:01:17 crc kubenswrapper[4869]: I0314 09:01:17.974767 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.033360 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wd9hv" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.045629 4869 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9njzd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 14 09:01:18 crc kubenswrapper[4869]: [+]log ok 
Mar 14 09:01:18 crc kubenswrapper[4869]: [+]etcd ok Mar 14 09:01:18 crc kubenswrapper[4869]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 14 09:01:18 crc kubenswrapper[4869]: [+]poststarthook/generic-apiserver-start-informers ok Mar 14 09:01:18 crc kubenswrapper[4869]: [+]poststarthook/max-in-flight-filter ok Mar 14 09:01:18 crc kubenswrapper[4869]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 14 09:01:18 crc kubenswrapper[4869]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 14 09:01:18 crc kubenswrapper[4869]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 14 09:01:18 crc kubenswrapper[4869]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Mar 14 09:01:18 crc kubenswrapper[4869]: [-]poststarthook/project.openshift.io-projectcache failed: reason withheld Mar 14 09:01:18 crc kubenswrapper[4869]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 14 09:01:18 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-startinformers ok Mar 14 09:01:18 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 14 09:01:18 crc kubenswrapper[4869]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 14 09:01:18 crc kubenswrapper[4869]: livez check failed Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.045841 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" podUID="8518e88a-aacc-484f-b82d-d55106c5bdcf" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.047238 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.049247 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.050972 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.051141 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.055679 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.075339 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.116656 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xwqm"] Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.119587 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cz2t" event={"ID":"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66","Type":"ContainerStarted","Data":"7fb07d1341ad1dfdc0327b306bff847913b9c74ca28aa6bc8806b4b887a34f9f"} Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.124636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"eb7bc69c-b53b-4faa-8e75-a409f36af034","Type":"ContainerStarted","Data":"b5daf892ac6be1d395195b66f1f3468ba649a2a1e8590ef8119620a2943a0e45"} Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.126212 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="8466d496-2ca4-49f2-96ff-75386b047783" containerID="a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83" exitCode=0 Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.126280 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94926" event={"ID":"8466d496-2ca4-49f2-96ff-75386b047783","Type":"ContainerDied","Data":"a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83"} Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.126297 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94926" event={"ID":"8466d496-2ca4-49f2-96ff-75386b047783","Type":"ContainerStarted","Data":"34debbefbc30dcbcf8242579dd7b8f5fb6f598706bc3bcc3982e803459afcb17"} Mar 14 09:01:18 crc kubenswrapper[4869]: W0314 09:01:18.129228 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bb7315d_59e6_4f41_a983_700a083a75af.slice/crio-6d20b6d4fb035727306883a419030fe63fdf3c43940200a805a3d1f7d525c05a WatchSource:0}: Error finding container 6d20b6d4fb035727306883a419030fe63fdf3c43940200a805a3d1f7d525c05a: Status 404 returned error can't find the container with id 6d20b6d4fb035727306883a419030fe63fdf3c43940200a805a3d1f7d525c05a Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.138575 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwb49" event={"ID":"2cd7688a-9024-48c6-9094-3df0aaa49aa7","Type":"ContainerStarted","Data":"e388481983f6738bdb6ea42bbcb95f8c657c56fa3194115cdfdf7dce93636155"} Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.148650 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-p6mcj" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.165700 4869 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability 
not set. Skipping MountDevice... Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.165765 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.254911 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84864ccc-d4e0-48f9-812e-c47e1ae77387-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"84864ccc-d4e0-48f9-812e-c47e1ae77387\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.255678 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84864ccc-d4e0-48f9-812e-c47e1ae77387-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"84864ccc-d4e0-48f9-812e-c47e1ae77387\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.273135 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fdrdm\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.361744 4869 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5756d879cc-lthf9"] Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.366365 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.366808 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84864ccc-d4e0-48f9-812e-c47e1ae77387-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"84864ccc-d4e0-48f9-812e-c47e1ae77387\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.366968 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84864ccc-d4e0-48f9-812e-c47e1ae77387-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"84864ccc-d4e0-48f9-812e-c47e1ae77387\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.367377 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84864ccc-d4e0-48f9-812e-c47e1ae77387-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"84864ccc-d4e0-48f9-812e-c47e1ae77387\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:18 crc kubenswrapper[4869]: W0314 09:01:18.374969 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55ed271a_fef0_4210_9714_2bee1a22aef4.slice/crio-b9720f67edb67d5929fddfbbc16d230139b61177a5549fbf7cc0d697f76cb1cc WatchSource:0}: Error finding 
container b9720f67edb67d5929fddfbbc16d230139b61177a5549fbf7cc0d697f76cb1cc: Status 404 returned error can't find the container with id b9720f67edb67d5929fddfbbc16d230139b61177a5549fbf7cc0d697f76cb1cc Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.384326 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.393373 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.412328 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84864ccc-d4e0-48f9-812e-c47e1ae77387-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"84864ccc-d4e0-48f9-812e-c47e1ae77387\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.424055 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.498363 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b"] Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.547763 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:18 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:18 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:18 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.547848 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.582628 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klzzb"] Mar 14 09:01:18 crc kubenswrapper[4869]: W0314 09:01:18.636210 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ae3c37_af29_4957_9648_52c28558591e.slice/crio-8fdcacfc466131ce5fe25e6866d191382ac345315168e196737da93a3ea85d66 WatchSource:0}: Error finding container 8fdcacfc466131ce5fe25e6866d191382ac345315168e196737da93a3ea85d66: Status 404 returned error can't find the container with id 8fdcacfc466131ce5fe25e6866d191382ac345315168e196737da93a3ea85d66 Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.735406 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-fdrdm"] Mar 14 09:01:18 crc kubenswrapper[4869]: W0314 09:01:18.740821 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91339654_6d93_49bd_b48a_d2cf1dde09aa.slice/crio-670b368dd37915342cbc9a12922bf7f61dfbc1752e7ba5647141d3fa7ddd3c69 WatchSource:0}: Error finding container 670b368dd37915342cbc9a12922bf7f61dfbc1752e7ba5647141d3fa7ddd3c69: Status 404 returned error can't find the container with id 670b368dd37915342cbc9a12922bf7f61dfbc1752e7ba5647141d3fa7ddd3c69 Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.902682 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9kv6g"] Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.924317 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.930791 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.932415 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kv6g"] Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.969214 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 14 09:01:18 crc kubenswrapper[4869]: W0314 09:01:18.979063 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod84864ccc_d4e0_48f9_812e_c47e1ae77387.slice/crio-21a59fc5aae2a517244e00952d1a6345cde01ea08e33cd1acb5926d536397f0f WatchSource:0}: Error finding container 21a59fc5aae2a517244e00952d1a6345cde01ea08e33cd1acb5926d536397f0f: Status 404 returned error can't find the container with id 
21a59fc5aae2a517244e00952d1a6345cde01ea08e33cd1acb5926d536397f0f Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.998408 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-catalog-content\") pod \"redhat-marketplace-9kv6g\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") " pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.998491 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bgfx\" (UniqueName: \"kubernetes.io/projected/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-kube-api-access-2bgfx\") pod \"redhat-marketplace-9kv6g\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") " pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:18 crc kubenswrapper[4869]: I0314 09:01:18.998599 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-utilities\") pod \"redhat-marketplace-9kv6g\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") " pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.100498 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-utilities\") pod \"redhat-marketplace-9kv6g\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") " pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.100594 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-catalog-content\") pod \"redhat-marketplace-9kv6g\" (UID: 
\"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") " pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.100637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bgfx\" (UniqueName: \"kubernetes.io/projected/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-kube-api-access-2bgfx\") pod \"redhat-marketplace-9kv6g\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") " pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.101900 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-utilities\") pod \"redhat-marketplace-9kv6g\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") " pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.102158 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-catalog-content\") pod \"redhat-marketplace-9kv6g\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") " pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.144411 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bgfx\" (UniqueName: \"kubernetes.io/projected/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-kube-api-access-2bgfx\") pod \"redhat-marketplace-9kv6g\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") " pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.158928 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xwqm" event={"ID":"0bb7315d-59e6-4f41-a983-700a083a75af","Type":"ContainerStarted","Data":"6d20b6d4fb035727306883a419030fe63fdf3c43940200a805a3d1f7d525c05a"} Mar 14 
09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.161067 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klzzb" event={"ID":"a1ae3c37-af29-4957-9648-52c28558591e","Type":"ContainerStarted","Data":"8fdcacfc466131ce5fe25e6866d191382ac345315168e196737da93a3ea85d66"} Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.163386 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"84864ccc-d4e0-48f9-812e-c47e1ae77387","Type":"ContainerStarted","Data":"21a59fc5aae2a517244e00952d1a6345cde01ea08e33cd1acb5926d536397f0f"} Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.165641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" event={"ID":"55ed271a-fef0-4210-9714-2bee1a22aef4","Type":"ContainerStarted","Data":"b9720f67edb67d5929fddfbbc16d230139b61177a5549fbf7cc0d697f76cb1cc"} Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.167159 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" event={"ID":"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606","Type":"ContainerStarted","Data":"2fb726089277139c58c9a1055cbd0d44b443f7f6a76b6d1ec448e206b8245167"} Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.169616 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" event={"ID":"91339654-6d93-49bd-b48a-d2cf1dde09aa","Type":"ContainerStarted","Data":"670b368dd37915342cbc9a12922bf7f61dfbc1752e7ba5647141d3fa7ddd3c69"} Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.256107 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.308098 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sxrzk"] Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.317362 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxrzk"] Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.317606 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.410586 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-utilities\") pod \"redhat-marketplace-sxrzk\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.411219 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzgmx\" (UniqueName: \"kubernetes.io/projected/5afc16e4-c9b7-493a-be94-02e5f318c725-kube-api-access-hzgmx\") pod \"redhat-marketplace-sxrzk\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.411348 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-catalog-content\") pod \"redhat-marketplace-sxrzk\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.513328 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-catalog-content\") pod \"redhat-marketplace-sxrzk\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.513389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-utilities\") pod \"redhat-marketplace-sxrzk\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.513449 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzgmx\" (UniqueName: \"kubernetes.io/projected/5afc16e4-c9b7-493a-be94-02e5f318c725-kube-api-access-hzgmx\") pod \"redhat-marketplace-sxrzk\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.514843 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-catalog-content\") pod \"redhat-marketplace-sxrzk\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.517321 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-utilities\") pod \"redhat-marketplace-sxrzk\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.544066 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:19 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:19 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:19 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.544137 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.562643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzgmx\" (UniqueName: \"kubernetes.io/projected/5afc16e4-c9b7-493a-be94-02e5f318c725-kube-api-access-hzgmx\") pod \"redhat-marketplace-sxrzk\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.632449 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kv6g"] Mar 14 09:01:19 crc kubenswrapper[4869]: W0314 09:01:19.653630 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40c9b0bd_b30e_470c_bf30_bd55c35e2e84.slice/crio-04a797ef74df3d2682e1c4a2f9c00b8970dfa10018ddd3a48b41d041e0258fb7 WatchSource:0}: Error finding container 04a797ef74df3d2682e1c4a2f9c00b8970dfa10018ddd3a48b41d041e0258fb7: Status 404 returned error can't find the container with id 04a797ef74df3d2682e1c4a2f9c00b8970dfa10018ddd3a48b41d041e0258fb7 Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.719402 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 
14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.809250 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.907234 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wt8jx"] Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.908460 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.911339 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 09:01:19 crc kubenswrapper[4869]: I0314 09:01:19.925225 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wt8jx"] Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.031199 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-utilities\") pod \"redhat-operators-wt8jx\" (UID: \"25990a28-3536-4602-9439-666774908da0\") " pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.031661 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fph\" (UniqueName: \"kubernetes.io/projected/25990a28-3536-4602-9439-666774908da0-kube-api-access-l9fph\") pod \"redhat-operators-wt8jx\" (UID: \"25990a28-3536-4602-9439-666774908da0\") " pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.031685 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-catalog-content\") pod \"redhat-operators-wt8jx\" (UID: \"25990a28-3536-4602-9439-666774908da0\") " pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.133625 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-utilities\") pod \"redhat-operators-wt8jx\" (UID: \"25990a28-3536-4602-9439-666774908da0\") " pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.133670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9fph\" (UniqueName: \"kubernetes.io/projected/25990a28-3536-4602-9439-666774908da0-kube-api-access-l9fph\") pod \"redhat-operators-wt8jx\" (UID: \"25990a28-3536-4602-9439-666774908da0\") " pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.133689 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-catalog-content\") pod \"redhat-operators-wt8jx\" (UID: \"25990a28-3536-4602-9439-666774908da0\") " pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.135010 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-utilities\") pod \"redhat-operators-wt8jx\" (UID: \"25990a28-3536-4602-9439-666774908da0\") " pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.141264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-catalog-content\") pod \"redhat-operators-wt8jx\" (UID: \"25990a28-3536-4602-9439-666774908da0\") " pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.189705 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9fph\" (UniqueName: \"kubernetes.io/projected/25990a28-3536-4602-9439-666774908da0-kube-api-access-l9fph\") pod \"redhat-operators-wt8jx\" (UID: \"25990a28-3536-4602-9439-666774908da0\") " pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.202233 4869 generic.go:334] "Generic (PLEG): container finished" podID="eb7bc69c-b53b-4faa-8e75-a409f36af034" containerID="a5cb6438520c22513bd4b04d878e60fdc0da9047fc37fab9ec4ad98eac51ccfd" exitCode=0 Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.202321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"eb7bc69c-b53b-4faa-8e75-a409f36af034","Type":"ContainerDied","Data":"a5cb6438520c22513bd4b04d878e60fdc0da9047fc37fab9ec4ad98eac51ccfd"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.204082 4869 generic.go:334] "Generic (PLEG): container finished" podID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerID="9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8" exitCode=0 Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.204134 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kv6g" event={"ID":"40c9b0bd-b30e-470c-bf30-bd55c35e2e84","Type":"ContainerDied","Data":"9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.204153 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kv6g" 
event={"ID":"40c9b0bd-b30e-470c-bf30-bd55c35e2e84","Type":"ContainerStarted","Data":"04a797ef74df3d2682e1c4a2f9c00b8970dfa10018ddd3a48b41d041e0258fb7"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.229746 4869 ???:1] "http: TLS handshake error from 192.168.126.11:41608: no serving certificate available for the kubelet" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.232947 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.240493 4869 generic.go:334] "Generic (PLEG): container finished" podID="0bb7315d-59e6-4f41-a983-700a083a75af" containerID="bd315f815ebbc994d8577dea670b68bbf1cc964e9f6c5bdcbed08f35454a5155" exitCode=0 Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.240636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xwqm" event={"ID":"0bb7315d-59e6-4f41-a983-700a083a75af","Type":"ContainerDied","Data":"bd315f815ebbc994d8577dea670b68bbf1cc964e9f6c5bdcbed08f35454a5155"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.250486 4869 generic.go:334] "Generic (PLEG): container finished" podID="a1ae3c37-af29-4957-9648-52c28558591e" containerID="7b3b16e6bd5d757fc8487a7b9b407963c46d20c8aec95e0b9e21f6b142ceabb0" exitCode=0 Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.250639 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klzzb" event={"ID":"a1ae3c37-af29-4957-9648-52c28558591e","Type":"ContainerDied","Data":"7b3b16e6bd5d757fc8487a7b9b407963c46d20c8aec95e0b9e21f6b142ceabb0"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.280148 4869 generic.go:334] "Generic (PLEG): container finished" podID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerID="aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c" exitCode=0 Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.280387 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cz2t" event={"ID":"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66","Type":"ContainerDied","Data":"aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.304965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" event={"ID":"55ed271a-fef0-4210-9714-2bee1a22aef4","Type":"ContainerStarted","Data":"ffa3e54278d7c9d1feb98903845b6ae9d61b79f3a29f8c10aa0cbeb3fd6ea0c3"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.312711 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.314399 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" event={"ID":"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606","Type":"ContainerStarted","Data":"9eaa68739ccf99a19a8c33a56129f28b5c5d322f8b2994d15517999292b6087a"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.315797 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.339144 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sfjjg"] Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.340776 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.348172 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.374022 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfjjg"] Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.387372 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" event={"ID":"91339654-6d93-49bd-b48a-d2cf1dde09aa","Type":"ContainerStarted","Data":"14680713befe04a8754a3a341e5ac9e93507206312af9b4973f5bf11e4fba9e5"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.387416 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.393835 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"84864ccc-d4e0-48f9-812e-c47e1ae77387","Type":"ContainerStarted","Data":"03689d598bcb3a6bd9c54f67e3e1bd5ecbee450d3e141017fa386c44ac57f81c"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.406046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwb49" event={"ID":"2cd7688a-9024-48c6-9094-3df0aaa49aa7","Type":"ContainerStarted","Data":"2671c55f4c6f5d7bf77d5b727e2c5cfaed3dd4ed37bd026e7a9904fc4ab9830f"} Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.454711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-catalog-content\") pod \"redhat-operators-sfjjg\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " 
pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.454846 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tqlh\" (UniqueName: \"kubernetes.io/projected/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-kube-api-access-8tqlh\") pod \"redhat-operators-sfjjg\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.454940 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-utilities\") pod \"redhat-operators-sfjjg\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.461219 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" podStartSLOduration=5.460965763 podStartE2EDuration="5.460965763s" podCreationTimestamp="2026-03-14 09:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:20.453999077 +0000 UTC m=+233.426281150" watchObservedRunningTime="2026-03-14 09:01:20.460965763 +0000 UTC m=+233.433247816" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.474806 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxrzk"] Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.503947 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" podStartSLOduration=5.503925597 podStartE2EDuration="5.503925597s" podCreationTimestamp="2026-03-14 09:01:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:20.497951766 +0000 UTC m=+233.470233819" watchObservedRunningTime="2026-03-14 09:01:20.503925597 +0000 UTC m=+233.476207650" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.548372 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:20 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:20 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:20 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.548462 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.556708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqlh\" (UniqueName: \"kubernetes.io/projected/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-kube-api-access-8tqlh\") pod \"redhat-operators-sfjjg\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.556814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-utilities\") pod \"redhat-operators-sfjjg\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.556917 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-catalog-content\") pod \"redhat-operators-sfjjg\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.557793 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-catalog-content\") pod \"redhat-operators-sfjjg\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.557876 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-utilities\") pod \"redhat-operators-sfjjg\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.605200 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xwb49" podStartSLOduration=16.605183189999998 podStartE2EDuration="16.60518319s" podCreationTimestamp="2026-03-14 09:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:20.602193455 +0000 UTC m=+233.574475508" watchObservedRunningTime="2026-03-14 09:01:20.60518319 +0000 UTC m=+233.577465243" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.609729 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqlh\" (UniqueName: \"kubernetes.io/projected/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-kube-api-access-8tqlh\") pod \"redhat-operators-sfjjg\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " 
pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.637731 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" podStartSLOduration=169.637701551 podStartE2EDuration="2m49.637701551s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:01:20.63647761 +0000 UTC m=+233.608759683" watchObservedRunningTime="2026-03-14 09:01:20.637701551 +0000 UTC m=+233.609983624" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.680870 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:01:20 crc kubenswrapper[4869]: I0314 09:01:20.696400 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:21 crc kubenswrapper[4869]: I0314 09:01:21.437765 4869 generic.go:334] "Generic (PLEG): container finished" podID="84864ccc-d4e0-48f9-812e-c47e1ae77387" containerID="03689d598bcb3a6bd9c54f67e3e1bd5ecbee450d3e141017fa386c44ac57f81c" exitCode=0 Mar 14 09:01:21 crc kubenswrapper[4869]: I0314 09:01:21.437911 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"84864ccc-d4e0-48f9-812e-c47e1ae77387","Type":"ContainerDied","Data":"03689d598bcb3a6bd9c54f67e3e1bd5ecbee450d3e141017fa386c44ac57f81c"} Mar 14 09:01:21 crc kubenswrapper[4869]: I0314 09:01:21.543697 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:21 crc kubenswrapper[4869]: [-]has-synced 
failed: reason withheld Mar 14 09:01:21 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:21 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:21 crc kubenswrapper[4869]: I0314 09:01:21.543782 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:22 crc kubenswrapper[4869]: I0314 09:01:22.045858 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:22 crc kubenswrapper[4869]: I0314 09:01:22.054532 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9njzd" Mar 14 09:01:22 crc kubenswrapper[4869]: I0314 09:01:22.546826 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:22 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:22 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:22 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:22 crc kubenswrapper[4869]: I0314 09:01:22.547321 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:23 crc kubenswrapper[4869]: I0314 09:01:23.026423 4869 ???:1] "http: TLS handshake error from 192.168.126.11:41616: no serving certificate available for the kubelet" Mar 14 09:01:23 crc kubenswrapper[4869]: I0314 09:01:23.411671 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-dns/dns-default-jwbdc" Mar 14 09:01:23 crc kubenswrapper[4869]: I0314 09:01:23.542550 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:23 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:23 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:23 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:23 crc kubenswrapper[4869]: I0314 09:01:23.542606 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:24 crc kubenswrapper[4869]: I0314 09:01:24.542987 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:24 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:24 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:24 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:24 crc kubenswrapper[4869]: I0314 09:01:24.543068 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:25 crc kubenswrapper[4869]: I0314 09:01:25.390953 4869 ???:1] "http: TLS handshake error from 192.168.126.11:41618: no serving certificate available for the kubelet" Mar 14 09:01:25 crc kubenswrapper[4869]: I0314 09:01:25.546056 4869 patch_prober.go:28] interesting 
pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:25 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:25 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:25 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:25 crc kubenswrapper[4869]: I0314 09:01:25.546169 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.112726 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.170812 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84864ccc-d4e0-48f9-812e-c47e1ae77387-kube-api-access\") pod \"84864ccc-d4e0-48f9-812e-c47e1ae77387\" (UID: \"84864ccc-d4e0-48f9-812e-c47e1ae77387\") " Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.171009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84864ccc-d4e0-48f9-812e-c47e1ae77387-kubelet-dir\") pod \"84864ccc-d4e0-48f9-812e-c47e1ae77387\" (UID: \"84864ccc-d4e0-48f9-812e-c47e1ae77387\") " Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.171184 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84864ccc-d4e0-48f9-812e-c47e1ae77387-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84864ccc-d4e0-48f9-812e-c47e1ae77387" (UID: "84864ccc-d4e0-48f9-812e-c47e1ae77387"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.171469 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84864ccc-d4e0-48f9-812e-c47e1ae77387-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.191995 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84864ccc-d4e0-48f9-812e-c47e1ae77387-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84864ccc-d4e0-48f9-812e-c47e1ae77387" (UID: "84864ccc-d4e0-48f9-812e-c47e1ae77387"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.273096 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84864ccc-d4e0-48f9-812e-c47e1ae77387-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.480193 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"84864ccc-d4e0-48f9-812e-c47e1ae77387","Type":"ContainerDied","Data":"21a59fc5aae2a517244e00952d1a6345cde01ea08e33cd1acb5926d536397f0f"} Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.480274 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21a59fc5aae2a517244e00952d1a6345cde01ea08e33cd1acb5926d536397f0f" Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.480419 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.542166 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:26 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:26 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:26 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:26 crc kubenswrapper[4869]: I0314 09:01:26.542243 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:27 crc kubenswrapper[4869]: I0314 09:01:27.416284 4869 patch_prober.go:28] interesting pod/console-f9d7485db-plgzk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 14 09:01:27 crc kubenswrapper[4869]: I0314 09:01:27.417000 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-plgzk" podUID="14eab3cd-227a-4e8a-8bf1-f78ee852637c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 14 09:01:27 crc kubenswrapper[4869]: I0314 09:01:27.485096 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:27 crc kubenswrapper[4869]: I0314 09:01:27.485155 4869 
patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:27 crc kubenswrapper[4869]: I0314 09:01:27.485184 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:01:27 crc kubenswrapper[4869]: I0314 09:01:27.485221 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:01:27 crc kubenswrapper[4869]: I0314 09:01:27.544416 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:27 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:27 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:27 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:27 crc kubenswrapper[4869]: I0314 09:01:27.544570 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:28 crc kubenswrapper[4869]: I0314 09:01:28.545009 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:28 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:28 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:28 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:28 crc kubenswrapper[4869]: I0314 09:01:28.545812 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:29 crc kubenswrapper[4869]: I0314 09:01:29.543895 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:29 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:29 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:29 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:29 crc kubenswrapper[4869]: I0314 09:01:29.544046 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 09:01:30 crc kubenswrapper[4869]: I0314 09:01:30.042592 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:01:30 crc kubenswrapper[4869]: I0314 09:01:30.045623 4869 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 14 09:01:30 crc kubenswrapper[4869]: I0314 09:01:30.063492 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b5b025a-d78e-4728-b492-19846b3ad862-metrics-certs\") pod \"network-metrics-daemon-n77vq\" (UID: \"0b5b025a-d78e-4728-b492-19846b3ad862\") " pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:01:30 crc kubenswrapper[4869]: I0314 09:01:30.219459 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 14 09:01:30 crc kubenswrapper[4869]: I0314 09:01:30.228101 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n77vq" Mar 14 09:01:30 crc kubenswrapper[4869]: I0314 09:01:30.514321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxrzk" event={"ID":"5afc16e4-c9b7-493a-be94-02e5f318c725","Type":"ContainerStarted","Data":"cc6ea94056a27b82b9b3d51862876106347bc8717eecf8f206b2442d9766747d"} Mar 14 09:01:30 crc kubenswrapper[4869]: I0314 09:01:30.545776 4869 patch_prober.go:28] interesting pod/router-default-5444994796-2pnmj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 14 09:01:30 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Mar 14 09:01:30 crc kubenswrapper[4869]: [+]process-running ok Mar 14 09:01:30 crc kubenswrapper[4869]: healthz check failed Mar 14 09:01:30 crc kubenswrapper[4869]: I0314 09:01:30.545862 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2pnmj" podUID="7d051a1c-0150-43fd-b2dd-45ba5f654021" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 14 
09:01:30 crc kubenswrapper[4869]: I0314 09:01:30.922558 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.058121 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb7bc69c-b53b-4faa-8e75-a409f36af034-kubelet-dir\") pod \"eb7bc69c-b53b-4faa-8e75-a409f36af034\" (UID: \"eb7bc69c-b53b-4faa-8e75-a409f36af034\") " Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.058222 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb7bc69c-b53b-4faa-8e75-a409f36af034-kube-api-access\") pod \"eb7bc69c-b53b-4faa-8e75-a409f36af034\" (UID: \"eb7bc69c-b53b-4faa-8e75-a409f36af034\") " Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.059622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb7bc69c-b53b-4faa-8e75-a409f36af034-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eb7bc69c-b53b-4faa-8e75-a409f36af034" (UID: "eb7bc69c-b53b-4faa-8e75-a409f36af034"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.068836 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb7bc69c-b53b-4faa-8e75-a409f36af034-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eb7bc69c-b53b-4faa-8e75-a409f36af034" (UID: "eb7bc69c-b53b-4faa-8e75-a409f36af034"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.159763 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb7bc69c-b53b-4faa-8e75-a409f36af034-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.159798 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb7bc69c-b53b-4faa-8e75-a409f36af034-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.521958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"eb7bc69c-b53b-4faa-8e75-a409f36af034","Type":"ContainerDied","Data":"b5daf892ac6be1d395195b66f1f3468ba649a2a1e8590ef8119620a2943a0e45"} Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.522066 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5daf892ac6be1d395195b66f1f3468ba649a2a1e8590ef8119620a2943a0e45" Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.522192 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.546472 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:31 crc kubenswrapper[4869]: I0314 09:01:31.550256 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2pnmj" Mar 14 09:01:34 crc kubenswrapper[4869]: I0314 09:01:34.255675 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5756d879cc-lthf9"] Mar 14 09:01:34 crc kubenswrapper[4869]: I0314 09:01:34.257985 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" podUID="55ed271a-fef0-4210-9714-2bee1a22aef4" containerName="controller-manager" containerID="cri-o://ffa3e54278d7c9d1feb98903845b6ae9d61b79f3a29f8c10aa0cbeb3fd6ea0c3" gracePeriod=30 Mar 14 09:01:34 crc kubenswrapper[4869]: I0314 09:01:34.282639 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b"] Mar 14 09:01:34 crc kubenswrapper[4869]: I0314 09:01:34.282991 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" podUID="edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" containerName="route-controller-manager" containerID="cri-o://9eaa68739ccf99a19a8c33a56129f28b5c5d322f8b2994d15517999292b6087a" gracePeriod=30 Mar 14 09:01:35 crc kubenswrapper[4869]: I0314 09:01:35.566392 4869 generic.go:334] "Generic (PLEG): container finished" podID="55ed271a-fef0-4210-9714-2bee1a22aef4" containerID="ffa3e54278d7c9d1feb98903845b6ae9d61b79f3a29f8c10aa0cbeb3fd6ea0c3" exitCode=0 Mar 14 09:01:35 crc kubenswrapper[4869]: I0314 09:01:35.566488 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" event={"ID":"55ed271a-fef0-4210-9714-2bee1a22aef4","Type":"ContainerDied","Data":"ffa3e54278d7c9d1feb98903845b6ae9d61b79f3a29f8c10aa0cbeb3fd6ea0c3"} Mar 14 09:01:36 crc kubenswrapper[4869]: I0314 09:01:36.527138 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfjjg"] Mar 14 09:01:36 crc kubenswrapper[4869]: I0314 09:01:36.574676 4869 generic.go:334] "Generic (PLEG): container finished" podID="edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" containerID="9eaa68739ccf99a19a8c33a56129f28b5c5d322f8b2994d15517999292b6087a" exitCode=0 Mar 14 09:01:36 crc kubenswrapper[4869]: I0314 09:01:36.574738 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" event={"ID":"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606","Type":"ContainerDied","Data":"9eaa68739ccf99a19a8c33a56129f28b5c5d322f8b2994d15517999292b6087a"} Mar 14 09:01:36 crc kubenswrapper[4869]: E0314 09:01:36.818465 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 14 09:01:36 crc kubenswrapper[4869]: E0314 09:01:36.819113 4869 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 14 09:01:36 crc kubenswrapper[4869]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 14 09:01:36 crc kubenswrapper[4869]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xvxjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29557980-9t5kk_openshift-infra(db3ce98b-d0f8-4fda-84cb-390a11eb508e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 14 09:01:36 crc kubenswrapper[4869]: > logger="UnhandledError" Mar 14 09:01:36 crc kubenswrapper[4869]: E0314 09:01:36.820565 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" podUID="db3ce98b-d0f8-4fda-84cb-390a11eb508e" Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.421298 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.425380 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.485366 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.485453 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.485369 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.486000 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.486042 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-zgn62" Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.486798 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"cfd3bd0436a655abe42ee1dd47205ba9463708b022564b1b77a10e8250de197b"} pod="openshift-console/downloads-7954f5f757-zgn62" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.486853 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-zgn62" 
podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" containerID="cri-o://cfd3bd0436a655abe42ee1dd47205ba9463708b022564b1b77a10e8250de197b" gracePeriod=2 Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.486977 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:37 crc kubenswrapper[4869]: I0314 09:01:37.487073 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:01:37 crc kubenswrapper[4869]: E0314 09:01:37.582505 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" podUID="db3ce98b-d0f8-4fda-84cb-390a11eb508e" Mar 14 09:01:38 crc kubenswrapper[4869]: I0314 09:01:38.400351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:01:38 crc kubenswrapper[4869]: I0314 09:01:38.589186 4869 generic.go:334] "Generic (PLEG): container finished" podID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerID="cfd3bd0436a655abe42ee1dd47205ba9463708b022564b1b77a10e8250de197b" exitCode=0 Mar 14 09:01:38 crc kubenswrapper[4869]: I0314 09:01:38.589250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zgn62" 
event={"ID":"33333edb-d3b9-49eb-acc4-bc014c8da396","Type":"ContainerDied","Data":"cfd3bd0436a655abe42ee1dd47205ba9463708b022564b1b77a10e8250de197b"} Mar 14 09:01:38 crc kubenswrapper[4869]: I0314 09:01:38.694733 4869 patch_prober.go:28] interesting pod/controller-manager-5756d879cc-lthf9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:01:38 crc kubenswrapper[4869]: I0314 09:01:38.694832 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" podUID="55ed271a-fef0-4210-9714-2bee1a22aef4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 09:01:38 crc kubenswrapper[4869]: I0314 09:01:38.701846 4869 patch_prober.go:28] interesting pod/route-controller-manager-6cb7587b4-94w4b container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:01:38 crc kubenswrapper[4869]: I0314 09:01:38.702087 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" podUID="edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 09:01:39 crc kubenswrapper[4869]: I0314 09:01:39.605099 4869 patch_prober.go:28] 
interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:01:39 crc kubenswrapper[4869]: I0314 09:01:39.605558 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.556654 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.562870 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.608683 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf"] Mar 14 09:01:43 crc kubenswrapper[4869]: E0314 09:01:43.609174 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ed271a-fef0-4210-9714-2bee1a22aef4" containerName="controller-manager" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.609232 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ed271a-fef0-4210-9714-2bee1a22aef4" containerName="controller-manager" Mar 14 09:01:43 crc kubenswrapper[4869]: E0314 09:01:43.609260 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84864ccc-d4e0-48f9-812e-c47e1ae77387" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.609271 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="84864ccc-d4e0-48f9-812e-c47e1ae77387" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4869]: E0314 09:01:43.609294 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" containerName="route-controller-manager" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.609304 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" containerName="route-controller-manager" Mar 14 09:01:43 crc kubenswrapper[4869]: E0314 09:01:43.609315 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb7bc69c-b53b-4faa-8e75-a409f36af034" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.609323 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb7bc69c-b53b-4faa-8e75-a409f36af034" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.609447 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="55ed271a-fef0-4210-9714-2bee1a22aef4" containerName="controller-manager" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.609465 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" containerName="route-controller-manager" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.609475 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7bc69c-b53b-4faa-8e75-a409f36af034" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.609490 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="84864ccc-d4e0-48f9-812e-c47e1ae77387" containerName="pruner" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.610256 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.617231 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf"] Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.627661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" event={"ID":"55ed271a-fef0-4210-9714-2bee1a22aef4","Type":"ContainerDied","Data":"b9720f67edb67d5929fddfbbc16d230139b61177a5549fbf7cc0d697f76cb1cc"} Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.628049 4869 scope.go:117] "RemoveContainer" containerID="ffa3e54278d7c9d1feb98903845b6ae9d61b79f3a29f8c10aa0cbeb3fd6ea0c3" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.627699 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5756d879cc-lthf9" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.632323 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.632565 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b" event={"ID":"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606","Type":"ContainerDied","Data":"2fb726089277139c58c9a1055cbd0d44b443f7f6a76b6d1ec448e206b8245167"} Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.635163 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfjjg" event={"ID":"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4","Type":"ContainerStarted","Data":"a66c14ca01eccfdb511f4d2ac077a8e1d2afbbc9941be8b118f4665d83136a76"} Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.685773 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-serving-cert\") pod \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.685870 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-proxy-ca-bundles\") pod \"55ed271a-fef0-4210-9714-2bee1a22aef4\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.685912 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-config\") pod \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.685933 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-client-ca\") pod \"55ed271a-fef0-4210-9714-2bee1a22aef4\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.685952 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4b49\" (UniqueName: \"kubernetes.io/projected/55ed271a-fef0-4210-9714-2bee1a22aef4-kube-api-access-m4b49\") pod \"55ed271a-fef0-4210-9714-2bee1a22aef4\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.686017 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55ed271a-fef0-4210-9714-2bee1a22aef4-serving-cert\") pod \"55ed271a-fef0-4210-9714-2bee1a22aef4\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.686039 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-config\") pod \"55ed271a-fef0-4210-9714-2bee1a22aef4\" (UID: \"55ed271a-fef0-4210-9714-2bee1a22aef4\") " Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.686054 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngmjr\" (UniqueName: \"kubernetes.io/projected/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-kube-api-access-ngmjr\") pod \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.686088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-client-ca\") pod \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\" (UID: \"edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606\") " Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.687790 
4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-config" (OuterVolumeSpecName: "config") pod "edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" (UID: "edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.687817 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-client-ca" (OuterVolumeSpecName: "client-ca") pod "55ed271a-fef0-4210-9714-2bee1a22aef4" (UID: "55ed271a-fef0-4210-9714-2bee1a22aef4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.687948 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "55ed271a-fef0-4210-9714-2bee1a22aef4" (UID: "55ed271a-fef0-4210-9714-2bee1a22aef4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.687947 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-config" (OuterVolumeSpecName: "config") pod "55ed271a-fef0-4210-9714-2bee1a22aef4" (UID: "55ed271a-fef0-4210-9714-2bee1a22aef4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.688725 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-client-ca" (OuterVolumeSpecName: "client-ca") pod "edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" (UID: "edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.693125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ed271a-fef0-4210-9714-2bee1a22aef4-kube-api-access-m4b49" (OuterVolumeSpecName: "kube-api-access-m4b49") pod "55ed271a-fef0-4210-9714-2bee1a22aef4" (UID: "55ed271a-fef0-4210-9714-2bee1a22aef4"). InnerVolumeSpecName "kube-api-access-m4b49". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.694764 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ed271a-fef0-4210-9714-2bee1a22aef4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55ed271a-fef0-4210-9714-2bee1a22aef4" (UID: "55ed271a-fef0-4210-9714-2bee1a22aef4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.694787 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" (UID: "edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.695110 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-kube-api-access-ngmjr" (OuterVolumeSpecName: "kube-api-access-ngmjr") pod "edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" (UID: "edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606"). InnerVolumeSpecName "kube-api-access-ngmjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789187 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7778738b-c9ed-4188-b704-1fa40d0154fe-serving-cert\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789301 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhwnq\" (UniqueName: \"kubernetes.io/projected/7778738b-c9ed-4188-b704-1fa40d0154fe-kube-api-access-xhwnq\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789344 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-config\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789398 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-client-ca\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789554 4869 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789576 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789591 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789603 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789616 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789629 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4b49\" (UniqueName: \"kubernetes.io/projected/55ed271a-fef0-4210-9714-2bee1a22aef4-kube-api-access-m4b49\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789640 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55ed271a-fef0-4210-9714-2bee1a22aef4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.789652 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55ed271a-fef0-4210-9714-2bee1a22aef4-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc 
kubenswrapper[4869]: I0314 09:01:43.789663 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngmjr\" (UniqueName: \"kubernetes.io/projected/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606-kube-api-access-ngmjr\") on node \"crc\" DevicePath \"\"" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.891231 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7778738b-c9ed-4188-b704-1fa40d0154fe-serving-cert\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.891320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhwnq\" (UniqueName: \"kubernetes.io/projected/7778738b-c9ed-4188-b704-1fa40d0154fe-kube-api-access-xhwnq\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.891381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-config\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.891458 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-client-ca\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " 
pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.892911 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-client-ca\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.893338 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-config\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.896578 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7778738b-c9ed-4188-b704-1fa40d0154fe-serving-cert\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.910200 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhwnq\" (UniqueName: \"kubernetes.io/projected/7778738b-c9ed-4188-b704-1fa40d0154fe-kube-api-access-xhwnq\") pod \"route-controller-manager-7cfb685b4b-m46xf\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.935374 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.947831 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5756d879cc-lthf9"] Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.952055 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5756d879cc-lthf9"] Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.963156 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b"] Mar 14 09:01:43 crc kubenswrapper[4869]: I0314 09:01:43.968715 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7587b4-94w4b"] Mar 14 09:01:45 crc kubenswrapper[4869]: I0314 09:01:45.714089 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ed271a-fef0-4210-9714-2bee1a22aef4" path="/var/lib/kubelet/pods/55ed271a-fef0-4210-9714-2bee1a22aef4/volumes" Mar 14 09:01:45 crc kubenswrapper[4869]: I0314 09:01:45.716422 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606" path="/var/lib/kubelet/pods/edc5d5eb-6a8c-4f81-b6b3-c0db0b5e9606/volumes" Mar 14 09:01:45 crc kubenswrapper[4869]: I0314 09:01:45.898837 4869 ???:1] "http: TLS handshake error from 192.168.126.11:37012: no serving certificate available for the kubelet" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.322899 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-868c95f6b9-442cx"] Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.326804 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.334192 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.334322 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.334219 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.334531 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.335224 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.336872 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.345690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868c95f6b9-442cx"] Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.350016 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.435572 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lfd\" (UniqueName: \"kubernetes.io/projected/f957b5fc-0868-44c5-aee8-716147a9e18f-kube-api-access-q6lfd\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " 
pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.436007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-config\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.436153 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f957b5fc-0868-44c5-aee8-716147a9e18f-serving-cert\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.436257 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-client-ca\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.436366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-proxy-ca-bundles\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.538458 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f957b5fc-0868-44c5-aee8-716147a9e18f-serving-cert\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.538581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-client-ca\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.538630 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-proxy-ca-bundles\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.538678 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6lfd\" (UniqueName: \"kubernetes.io/projected/f957b5fc-0868-44c5-aee8-716147a9e18f-kube-api-access-q6lfd\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.538706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-config\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.540442 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-client-ca\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.540574 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-proxy-ca-bundles\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.544394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-config\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.553406 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f957b5fc-0868-44c5-aee8-716147a9e18f-serving-cert\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.561994 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6lfd\" (UniqueName: \"kubernetes.io/projected/f957b5fc-0868-44c5-aee8-716147a9e18f-kube-api-access-q6lfd\") pod \"controller-manager-868c95f6b9-442cx\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 
09:01:46 crc kubenswrapper[4869]: I0314 09:01:46.654726 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:01:47 crc kubenswrapper[4869]: I0314 09:01:47.487430 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:47 crc kubenswrapper[4869]: I0314 09:01:47.487595 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:01:47 crc kubenswrapper[4869]: I0314 09:01:47.778439 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ldfhw" Mar 14 09:01:49 crc kubenswrapper[4869]: I0314 09:01:49.044548 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.403406 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.405812 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.410279 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.411014 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.417929 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.516870 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf893e7e-0007-4706-a328-f905eadbbe46-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf893e7e-0007-4706-a328-f905eadbbe46\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.517011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf893e7e-0007-4706-a328-f905eadbbe46-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf893e7e-0007-4706-a328-f905eadbbe46\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.618564 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf893e7e-0007-4706-a328-f905eadbbe46-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf893e7e-0007-4706-a328-f905eadbbe46\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.618635 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cf893e7e-0007-4706-a328-f905eadbbe46-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf893e7e-0007-4706-a328-f905eadbbe46\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.618932 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf893e7e-0007-4706-a328-f905eadbbe46-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf893e7e-0007-4706-a328-f905eadbbe46\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.647000 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf893e7e-0007-4706-a328-f905eadbbe46-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf893e7e-0007-4706-a328-f905eadbbe46\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.740126 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:01:51 crc kubenswrapper[4869]: I0314 09:01:51.751543 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wt8jx"] Mar 14 09:01:51 crc kubenswrapper[4869]: E0314 09:01:51.776365 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 14 09:01:51 crc kubenswrapper[4869]: E0314 09:01:51.776671 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj949,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-94926_openshift-marketplace(8466d496-2ca4-49f2-96ff-75386b047783): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:51 crc kubenswrapper[4869]: E0314 09:01:51.777957 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-94926" podUID="8466d496-2ca4-49f2-96ff-75386b047783" Mar 14 09:01:54 crc kubenswrapper[4869]: I0314 09:01:54.328379 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868c95f6b9-442cx"] Mar 14 09:01:54 crc kubenswrapper[4869]: I0314 09:01:54.409704 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf"] Mar 14 09:01:56 crc kubenswrapper[4869]: E0314 09:01:56.507471 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 14 09:01:56 crc kubenswrapper[4869]: E0314 09:01:56.507757 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cjb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9xwqm_openshift-marketplace(0bb7315d-59e6-4f41-a983-700a083a75af): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:56 crc kubenswrapper[4869]: E0314 09:01:56.509600 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9xwqm" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" Mar 14 09:01:56 crc 
kubenswrapper[4869]: I0314 09:01:56.601935 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.603418 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: E0314 09:01:56.607724 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 14 09:01:56 crc kubenswrapper[4869]: E0314 09:01:56.607979 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2bgfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil
,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9kv6g_openshift-marketplace(40c9b0bd-b30e-470c-bf30-bd55c35e2e84): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:56 crc kubenswrapper[4869]: E0314 09:01:56.610468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9kv6g" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.624772 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.708456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60550272-ba92-4d24-b14e-ffd342a86579-kube-api-access\") pod \"installer-9-crc\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.708876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-var-lock\") pod \"installer-9-crc\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.708935 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-kubelet-dir\") pod \"installer-9-crc\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.796977 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n77vq"] Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.810949 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-var-lock\") pod \"installer-9-crc\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.811038 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-kubelet-dir\") pod \"installer-9-crc\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.811110 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60550272-ba92-4d24-b14e-ffd342a86579-kube-api-access\") pod \"installer-9-crc\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.811148 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-var-lock\") pod \"installer-9-crc\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 
09:01:56.811261 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-kubelet-dir\") pod \"installer-9-crc\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.844263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60550272-ba92-4d24-b14e-ffd342a86579-kube-api-access\") pod \"installer-9-crc\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:56 crc kubenswrapper[4869]: I0314 09:01:56.989004 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:01:57 crc kubenswrapper[4869]: I0314 09:01:57.498788 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:01:57 crc kubenswrapper[4869]: I0314 09:01:57.498863 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:01:59 crc kubenswrapper[4869]: E0314 09:01:59.563477 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-94926" podUID="8466d496-2ca4-49f2-96ff-75386b047783" Mar 14 09:01:59 crc 
kubenswrapper[4869]: E0314 09:01:59.576837 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9kv6g" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" Mar 14 09:01:59 crc kubenswrapper[4869]: E0314 09:01:59.577028 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9xwqm" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" Mar 14 09:01:59 crc kubenswrapper[4869]: E0314 09:01:59.581435 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 14 09:01:59 crc kubenswrapper[4869]: E0314 09:01:59.581647 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vsq2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-klzzb_openshift-marketplace(a1ae3c37-af29-4957-9648-52c28558591e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:59 crc kubenswrapper[4869]: E0314 09:01:59.583257 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-klzzb" podUID="a1ae3c37-af29-4957-9648-52c28558591e" Mar 14 09:01:59 crc 
kubenswrapper[4869]: E0314 09:01:59.585431 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 14 09:01:59 crc kubenswrapper[4869]: E0314 09:01:59.585552 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kc2t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-6cz2t_openshift-marketplace(3b454c3f-60ab-4a89-ab1e-e1e15cf08b66): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:01:59 crc kubenswrapper[4869]: E0314 09:01:59.586982 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6cz2t" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" Mar 14 09:01:59 crc kubenswrapper[4869]: I0314 09:01:59.635279 4869 scope.go:117] "RemoveContainer" containerID="9eaa68739ccf99a19a8c33a56129f28b5c5d322f8b2994d15517999292b6087a" Mar 14 09:01:59 crc kubenswrapper[4869]: I0314 09:01:59.897478 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n77vq" event={"ID":"0b5b025a-d78e-4728-b492-19846b3ad862","Type":"ContainerStarted","Data":"79529778c2336e3c15fab1c57f5d0ee077116ffba9a28156e72e5d308494fa3b"} Mar 14 09:01:59 crc kubenswrapper[4869]: I0314 09:01:59.902693 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfjjg" event={"ID":"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4","Type":"ContainerStarted","Data":"cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b"} Mar 14 09:01:59 crc kubenswrapper[4869]: I0314 09:01:59.906041 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxrzk" event={"ID":"5afc16e4-c9b7-493a-be94-02e5f318c725","Type":"ContainerStarted","Data":"9dcc4dbe1b0cb716615c54962d6647b2899ba481cb959358fcbf1a1666020288"} Mar 14 09:01:59 crc kubenswrapper[4869]: E0314 09:01:59.912830 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-klzzb" podUID="a1ae3c37-af29-4957-9648-52c28558591e" Mar 14 09:01:59 crc kubenswrapper[4869]: I0314 09:01:59.911111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wt8jx" event={"ID":"25990a28-3536-4602-9439-666774908da0","Type":"ContainerStarted","Data":"482509ce41a7b1d2d4b846b022d0f0dd0021e59aa39d61a07dd8f491b30b6785"} Mar 14 09:01:59 crc kubenswrapper[4869]: E0314 09:01:59.914083 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6cz2t" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" Mar 14 09:01:59 crc kubenswrapper[4869]: I0314 09:01:59.989222 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 14 09:02:00 crc kubenswrapper[4869]: W0314 09:02:00.009408 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcf893e7e_0007_4706_a328_f905eadbbe46.slice/crio-b1fa5e2583d5765e59de0cfa1c72c61c3cb8e0f71cc2247de757b672b8186801 WatchSource:0}: Error finding container b1fa5e2583d5765e59de0cfa1c72c61c3cb8e0f71cc2247de757b672b8186801: Status 404 returned error can't find the container with id b1fa5e2583d5765e59de0cfa1c72c61c3cb8e0f71cc2247de757b672b8186801 Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.054230 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.091294 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868c95f6b9-442cx"] Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.113895 4869 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf"] Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.138919 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557982-m47g2"] Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.139727 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557982-m47g2" Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.142325 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.147438 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557982-m47g2"] Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.300940 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8pq\" (UniqueName: \"kubernetes.io/projected/28fc8bb0-4a61-40cf-809f-408035a85c2e-kube-api-access-9l8pq\") pod \"auto-csr-approver-29557982-m47g2\" (UID: \"28fc8bb0-4a61-40cf-809f-408035a85c2e\") " pod="openshift-infra/auto-csr-approver-29557982-m47g2" Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.402334 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l8pq\" (UniqueName: \"kubernetes.io/projected/28fc8bb0-4a61-40cf-809f-408035a85c2e-kube-api-access-9l8pq\") pod \"auto-csr-approver-29557982-m47g2\" (UID: \"28fc8bb0-4a61-40cf-809f-408035a85c2e\") " pod="openshift-infra/auto-csr-approver-29557982-m47g2" Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.428215 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l8pq\" (UniqueName: \"kubernetes.io/projected/28fc8bb0-4a61-40cf-809f-408035a85c2e-kube-api-access-9l8pq\") pod 
\"auto-csr-approver-29557982-m47g2\" (UID: \"28fc8bb0-4a61-40cf-809f-408035a85c2e\") " pod="openshift-infra/auto-csr-approver-29557982-m47g2" Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.460978 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557982-m47g2" Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.921657 4869 generic.go:334] "Generic (PLEG): container finished" podID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerID="cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b" exitCode=0 Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.921828 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfjjg" event={"ID":"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4","Type":"ContainerDied","Data":"cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b"} Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.931199 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cf893e7e-0007-4706-a328-f905eadbbe46","Type":"ContainerStarted","Data":"b1fa5e2583d5765e59de0cfa1c72c61c3cb8e0f71cc2247de757b672b8186801"} Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.949948 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zgn62" event={"ID":"33333edb-d3b9-49eb-acc4-bc014c8da396","Type":"ContainerStarted","Data":"7ac1f98c3da9c498ff71dbc477ae6f0179b80b1c18bae3e8cb99d700b7f033e6"} Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.950173 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zgn62" Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.955164 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"60550272-ba92-4d24-b14e-ffd342a86579","Type":"ContainerStarted","Data":"065f8467458db277eac17a8cbe3cb385bdd97f8b5f5bda954bd8c605d37ef4e8"} Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.956771 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.956840 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.967129 4869 generic.go:334] "Generic (PLEG): container finished" podID="25990a28-3536-4602-9439-666774908da0" containerID="d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954" exitCode=0 Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.967243 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wt8jx" event={"ID":"25990a28-3536-4602-9439-666774908da0","Type":"ContainerDied","Data":"d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954"} Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.978217 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n77vq" event={"ID":"0b5b025a-d78e-4728-b492-19846b3ad862","Type":"ContainerStarted","Data":"a8875779411a93ce1220f9fada9d9bbf2258cc615f9ed4193544db6d26d96a9f"} Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.987928 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" 
event={"ID":"f957b5fc-0868-44c5-aee8-716147a9e18f","Type":"ContainerStarted","Data":"106380d0c7ed2bac0488b76df8d93cf06596554a8fe051adbf956c57b589fea4"} Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.990382 4869 generic.go:334] "Generic (PLEG): container finished" podID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerID="9dcc4dbe1b0cb716615c54962d6647b2899ba481cb959358fcbf1a1666020288" exitCode=0 Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.990462 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxrzk" event={"ID":"5afc16e4-c9b7-493a-be94-02e5f318c725","Type":"ContainerDied","Data":"9dcc4dbe1b0cb716615c54962d6647b2899ba481cb959358fcbf1a1666020288"} Mar 14 09:02:00 crc kubenswrapper[4869]: I0314 09:02:00.995160 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" event={"ID":"7778738b-c9ed-4188-b704-1fa40d0154fe","Type":"ContainerStarted","Data":"faa895ad48146ad74ea7bf091663e5d33f91a8e643d4f2e886bc710c4b9be989"} Mar 14 09:02:01 crc kubenswrapper[4869]: I0314 09:02:01.222135 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557982-m47g2"] Mar 14 09:02:01 crc kubenswrapper[4869]: W0314 09:02:01.231073 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28fc8bb0_4a61_40cf_809f_408035a85c2e.slice/crio-df11f2bb98a147ea479b37c6708ae60305780c84b802fa3d0af3a9fbc750c617 WatchSource:0}: Error finding container df11f2bb98a147ea479b37c6708ae60305780c84b802fa3d0af3a9fbc750c617: Status 404 returned error can't find the container with id df11f2bb98a147ea479b37c6708ae60305780c84b802fa3d0af3a9fbc750c617 Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.010093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n77vq" 
event={"ID":"0b5b025a-d78e-4728-b492-19846b3ad862","Type":"ContainerStarted","Data":"f8a00af19b617adb81cb83ecd097b2588e0c9e0d175c206c9f565b6e0f6117d3"} Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.014274 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" event={"ID":"7778738b-c9ed-4188-b704-1fa40d0154fe","Type":"ContainerStarted","Data":"2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47"} Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.014455 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" podUID="7778738b-c9ed-4188-b704-1fa40d0154fe" containerName="route-controller-manager" containerID="cri-o://2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47" gracePeriod=30 Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.014564 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.018050 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" event={"ID":"f957b5fc-0868-44c5-aee8-716147a9e18f","Type":"ContainerStarted","Data":"0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0"} Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.018193 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" podUID="f957b5fc-0868-44c5-aee8-716147a9e18f" containerName="controller-manager" containerID="cri-o://0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0" gracePeriod=30 Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.018354 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.025786 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.026786 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557982-m47g2" event={"ID":"28fc8bb0-4a61-40cf-809f-408035a85c2e","Type":"ContainerStarted","Data":"df11f2bb98a147ea479b37c6708ae60305780c84b802fa3d0af3a9fbc750c617"} Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.028317 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.032707 4869 generic.go:334] "Generic (PLEG): container finished" podID="cf893e7e-0007-4706-a328-f905eadbbe46" containerID="7d642fa45484e93467c93c19ecabbbb621a571d61f23c7bb54ea0ce9f86f420a" exitCode=0 Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.032814 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cf893e7e-0007-4706-a328-f905eadbbe46","Type":"ContainerDied","Data":"7d642fa45484e93467c93c19ecabbbb621a571d61f23c7bb54ea0ce9f86f420a"} Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.041580 4869 csr.go:261] certificate signing request csr-qqj6l is approved, waiting to be issued Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.042022 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"60550272-ba92-4d24-b14e-ffd342a86579","Type":"ContainerStarted","Data":"88a3503b96056dadeedef17e98871e82396194723758dc525d669ef27e66dcdb"} Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.042870 4869 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-multus/network-metrics-daemon-n77vq" podStartSLOduration=211.042839862 podStartE2EDuration="3m31.042839862s" podCreationTimestamp="2026-03-14 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:02.041932449 +0000 UTC m=+275.014214512" watchObservedRunningTime="2026-03-14 09:02:02.042839862 +0000 UTC m=+275.015121915" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.044199 4869 csr.go:257] certificate signing request csr-qqj6l is issued Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.053886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" event={"ID":"db3ce98b-d0f8-4fda-84cb-390a11eb508e","Type":"ContainerStarted","Data":"f73a201c5709d0cb8fb9c9655cc45b3650362550d18c0d9a5e182e9b4a4863ba"} Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.054700 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zgn62 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.054765 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zgn62" podUID="33333edb-d3b9-49eb-acc4-bc014c8da396" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.070595 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" podStartSLOduration=28.07056985 podStartE2EDuration="28.07056985s" podCreationTimestamp="2026-03-14 09:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:02.061660505 +0000 UTC m=+275.033942558" watchObservedRunningTime="2026-03-14 09:02:02.07056985 +0000 UTC m=+275.042851923" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.115159 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" podStartSLOduration=28.11513879 podStartE2EDuration="28.11513879s" podCreationTimestamp="2026-03-14 09:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:02.113037318 +0000 UTC m=+275.085319371" watchObservedRunningTime="2026-03-14 09:02:02.11513879 +0000 UTC m=+275.087420843" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.148815 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" podStartSLOduration=70.497019801 podStartE2EDuration="2m2.148786777s" podCreationTimestamp="2026-03-14 09:00:00 +0000 UTC" firstStartedPulling="2026-03-14 09:01:09.110174351 +0000 UTC m=+222.082456404" lastFinishedPulling="2026-03-14 09:02:00.761941327 +0000 UTC m=+273.734223380" observedRunningTime="2026-03-14 09:02:02.147571937 +0000 UTC m=+275.119854010" watchObservedRunningTime="2026-03-14 09:02:02.148786777 +0000 UTC m=+275.121068850" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.170726 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.170678688 podStartE2EDuration="6.170678688s" podCreationTimestamp="2026-03-14 09:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:02.167738444 +0000 UTC m=+275.140020517" watchObservedRunningTime="2026-03-14 09:02:02.170678688 
+0000 UTC m=+275.142960761" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.760408 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.770144 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.806340 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d486fb69d-kg55t"] Mar 14 09:02:02 crc kubenswrapper[4869]: E0314 09:02:02.806641 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f957b5fc-0868-44c5-aee8-716147a9e18f" containerName="controller-manager" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.806654 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f957b5fc-0868-44c5-aee8-716147a9e18f" containerName="controller-manager" Mar 14 09:02:02 crc kubenswrapper[4869]: E0314 09:02:02.806672 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7778738b-c9ed-4188-b704-1fa40d0154fe" containerName="route-controller-manager" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.806683 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7778738b-c9ed-4188-b704-1fa40d0154fe" containerName="route-controller-manager" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.806814 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f957b5fc-0868-44c5-aee8-716147a9e18f" containerName="controller-manager" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.806832 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7778738b-c9ed-4188-b704-1fa40d0154fe" containerName="route-controller-manager" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.807272 4869 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.823612 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d486fb69d-kg55t"] Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.964348 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-client-ca\") pod \"7778738b-c9ed-4188-b704-1fa40d0154fe\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.965136 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6lfd\" (UniqueName: \"kubernetes.io/projected/f957b5fc-0868-44c5-aee8-716147a9e18f-kube-api-access-q6lfd\") pod \"f957b5fc-0868-44c5-aee8-716147a9e18f\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.965236 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7778738b-c9ed-4188-b704-1fa40d0154fe-serving-cert\") pod \"7778738b-c9ed-4188-b704-1fa40d0154fe\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.965987 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "7778738b-c9ed-4188-b704-1fa40d0154fe" (UID: "7778738b-c9ed-4188-b704-1fa40d0154fe"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.966315 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f957b5fc-0868-44c5-aee8-716147a9e18f-serving-cert\") pod \"f957b5fc-0868-44c5-aee8-716147a9e18f\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.966384 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-proxy-ca-bundles\") pod \"f957b5fc-0868-44c5-aee8-716147a9e18f\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.966453 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-config\") pod \"7778738b-c9ed-4188-b704-1fa40d0154fe\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.966514 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhwnq\" (UniqueName: \"kubernetes.io/projected/7778738b-c9ed-4188-b704-1fa40d0154fe-kube-api-access-xhwnq\") pod \"7778738b-c9ed-4188-b704-1fa40d0154fe\" (UID: \"7778738b-c9ed-4188-b704-1fa40d0154fe\") " Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.966626 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-config\") pod \"f957b5fc-0868-44c5-aee8-716147a9e18f\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.966674 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-client-ca\") pod \"f957b5fc-0868-44c5-aee8-716147a9e18f\" (UID: \"f957b5fc-0868-44c5-aee8-716147a9e18f\") " Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.966979 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-proxy-ca-bundles\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.968327 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-client-ca\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.968423 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-serving-cert\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.968460 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-config\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.968659 4869 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-client-ca" (OuterVolumeSpecName: "client-ca") pod "f957b5fc-0868-44c5-aee8-716147a9e18f" (UID: "f957b5fc-0868-44c5-aee8-716147a9e18f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.968706 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgnlk\" (UniqueName: \"kubernetes.io/projected/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-kube-api-access-jgnlk\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.968945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-config" (OuterVolumeSpecName: "config") pod "f957b5fc-0868-44c5-aee8-716147a9e18f" (UID: "f957b5fc-0868-44c5-aee8-716147a9e18f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.969057 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.969075 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.969088 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.969314 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f957b5fc-0868-44c5-aee8-716147a9e18f" (UID: "f957b5fc-0868-44c5-aee8-716147a9e18f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.969442 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-config" (OuterVolumeSpecName: "config") pod "7778738b-c9ed-4188-b704-1fa40d0154fe" (UID: "7778738b-c9ed-4188-b704-1fa40d0154fe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.972198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f957b5fc-0868-44c5-aee8-716147a9e18f-kube-api-access-q6lfd" (OuterVolumeSpecName: "kube-api-access-q6lfd") pod "f957b5fc-0868-44c5-aee8-716147a9e18f" (UID: "f957b5fc-0868-44c5-aee8-716147a9e18f"). InnerVolumeSpecName "kube-api-access-q6lfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.972453 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7778738b-c9ed-4188-b704-1fa40d0154fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7778738b-c9ed-4188-b704-1fa40d0154fe" (UID: "7778738b-c9ed-4188-b704-1fa40d0154fe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.973032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f957b5fc-0868-44c5-aee8-716147a9e18f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f957b5fc-0868-44c5-aee8-716147a9e18f" (UID: "f957b5fc-0868-44c5-aee8-716147a9e18f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:02 crc kubenswrapper[4869]: I0314 09:02:02.975668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7778738b-c9ed-4188-b704-1fa40d0154fe-kube-api-access-xhwnq" (OuterVolumeSpecName: "kube-api-access-xhwnq") pod "7778738b-c9ed-4188-b704-1fa40d0154fe" (UID: "7778738b-c9ed-4188-b704-1fa40d0154fe"). InnerVolumeSpecName "kube-api-access-xhwnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.046255 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-23 12:10:40.060772811 +0000 UTC Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.046300 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6819h8m37.014475965s for next certificate rotation Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.062826 4869 generic.go:334] "Generic (PLEG): container finished" podID="7778738b-c9ed-4188-b704-1fa40d0154fe" containerID="2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47" exitCode=0 Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.062906 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.062950 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" event={"ID":"7778738b-c9ed-4188-b704-1fa40d0154fe","Type":"ContainerDied","Data":"2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47"} Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.063492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf" event={"ID":"7778738b-c9ed-4188-b704-1fa40d0154fe","Type":"ContainerDied","Data":"faa895ad48146ad74ea7bf091663e5d33f91a8e643d4f2e886bc710c4b9be989"} Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.063544 4869 scope.go:117] "RemoveContainer" containerID="2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070133 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-client-ca\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070187 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-serving-cert\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070209 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-config\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgnlk\" (UniqueName: \"kubernetes.io/projected/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-kube-api-access-jgnlk\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-proxy-ca-bundles\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 
09:02:03.070332 4869 generic.go:334] "Generic (PLEG): container finished" podID="f957b5fc-0868-44c5-aee8-716147a9e18f" containerID="0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0" exitCode=0 Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070342 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6lfd\" (UniqueName: \"kubernetes.io/projected/f957b5fc-0868-44c5-aee8-716147a9e18f-kube-api-access-q6lfd\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070396 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7778738b-c9ed-4188-b704-1fa40d0154fe-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070408 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f957b5fc-0868-44c5-aee8-716147a9e18f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070418 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f957b5fc-0868-44c5-aee8-716147a9e18f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070428 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7778738b-c9ed-4188-b704-1fa40d0154fe-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070440 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhwnq\" (UniqueName: \"kubernetes.io/projected/7778738b-c9ed-4188-b704-1fa40d0154fe-kube-api-access-xhwnq\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.070494 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.071639 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-proxy-ca-bundles\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.071755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" event={"ID":"f957b5fc-0868-44c5-aee8-716147a9e18f","Type":"ContainerDied","Data":"0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0"} Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.072080 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868c95f6b9-442cx" event={"ID":"f957b5fc-0868-44c5-aee8-716147a9e18f","Type":"ContainerDied","Data":"106380d0c7ed2bac0488b76df8d93cf06596554a8fe051adbf956c57b589fea4"} Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.073561 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-config\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.073608 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-client-ca\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 
09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.077099 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557982-m47g2" event={"ID":"28fc8bb0-4a61-40cf-809f-408035a85c2e","Type":"ContainerStarted","Data":"9c7696676a23e9ff081bb8cac2b959e068640f92185d39b2ce6d8912c35ed709"} Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.077830 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-serving-cert\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.081362 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxrzk" event={"ID":"5afc16e4-c9b7-493a-be94-02e5f318c725","Type":"ContainerStarted","Data":"785c369dbcd49eac9fe1721660cb32bc3ebfab628b10a2590dade17538a063c1"} Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.093446 4869 generic.go:334] "Generic (PLEG): container finished" podID="db3ce98b-d0f8-4fda-84cb-390a11eb508e" containerID="f73a201c5709d0cb8fb9c9655cc45b3650362550d18c0d9a5e182e9b4a4863ba" exitCode=0 Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.094605 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" event={"ID":"db3ce98b-d0f8-4fda-84cb-390a11eb508e","Type":"ContainerDied","Data":"f73a201c5709d0cb8fb9c9655cc45b3650362550d18c0d9a5e182e9b4a4863ba"} Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.099211 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgnlk\" (UniqueName: \"kubernetes.io/projected/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-kube-api-access-jgnlk\") pod \"controller-manager-d486fb69d-kg55t\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " 
pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.103046 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557982-m47g2" podStartSLOduration=1.7186554950000001 podStartE2EDuration="3.103004093s" podCreationTimestamp="2026-03-14 09:02:00 +0000 UTC" firstStartedPulling="2026-03-14 09:02:01.234382572 +0000 UTC m=+274.206664625" lastFinishedPulling="2026-03-14 09:02:02.61873117 +0000 UTC m=+275.591013223" observedRunningTime="2026-03-14 09:02:03.098962702 +0000 UTC m=+276.071244765" watchObservedRunningTime="2026-03-14 09:02:03.103004093 +0000 UTC m=+276.075286156" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.106233 4869 scope.go:117] "RemoveContainer" containerID="2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47" Mar 14 09:02:03 crc kubenswrapper[4869]: E0314 09:02:03.106928 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47\": container with ID starting with 2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47 not found: ID does not exist" containerID="2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.106991 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47"} err="failed to get container status \"2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47\": rpc error: code = NotFound desc = could not find container \"2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47\": container with ID starting with 2471f9c28c58b233314af13ecc28476f1a96eca1b32b08b35bca14d640edab47 not found: ID does not exist" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 
09:02:03.107027 4869 scope.go:117] "RemoveContainer" containerID="0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.127630 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.155711 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf"] Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.162389 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cfb685b4b-m46xf"] Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.178407 4869 scope.go:117] "RemoveContainer" containerID="0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0" Mar 14 09:02:03 crc kubenswrapper[4869]: E0314 09:02:03.179825 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0\": container with ID starting with 0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0 not found: ID does not exist" containerID="0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.179875 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0"} err="failed to get container status \"0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0\": rpc error: code = NotFound desc = could not find container \"0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0\": container with ID starting with 0c58882ff28d74979469737c1572f633de0fafefc96e028d1277b5e1989ee6e0 not found: ID does not exist" Mar 14 09:02:03 
crc kubenswrapper[4869]: I0314 09:02:03.195558 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868c95f6b9-442cx"] Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.199900 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-868c95f6b9-442cx"] Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.357656 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.375182 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf893e7e-0007-4706-a328-f905eadbbe46-kubelet-dir\") pod \"cf893e7e-0007-4706-a328-f905eadbbe46\" (UID: \"cf893e7e-0007-4706-a328-f905eadbbe46\") " Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.375288 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf893e7e-0007-4706-a328-f905eadbbe46-kube-api-access\") pod \"cf893e7e-0007-4706-a328-f905eadbbe46\" (UID: \"cf893e7e-0007-4706-a328-f905eadbbe46\") " Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.375396 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf893e7e-0007-4706-a328-f905eadbbe46-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf893e7e-0007-4706-a328-f905eadbbe46" (UID: "cf893e7e-0007-4706-a328-f905eadbbe46"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.376755 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf893e7e-0007-4706-a328-f905eadbbe46-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.382042 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d486fb69d-kg55t"] Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.383685 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf893e7e-0007-4706-a328-f905eadbbe46-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf893e7e-0007-4706-a328-f905eadbbe46" (UID: "cf893e7e-0007-4706-a328-f905eadbbe46"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:03 crc kubenswrapper[4869]: W0314 09:02:03.397397 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e1bcfa_21f3_446f_aa5c_38f6abce60b2.slice/crio-83d3f5e2951a7643571f831bb10a87e34b5e69132b1c27292bb1a6dba8dc2d48 WatchSource:0}: Error finding container 83d3f5e2951a7643571f831bb10a87e34b5e69132b1c27292bb1a6dba8dc2d48: Status 404 returned error can't find the container with id 83d3f5e2951a7643571f831bb10a87e34b5e69132b1c27292bb1a6dba8dc2d48 Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.478066 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf893e7e-0007-4706-a328-f905eadbbe46-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.712509 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7778738b-c9ed-4188-b704-1fa40d0154fe" 
path="/var/lib/kubelet/pods/7778738b-c9ed-4188-b704-1fa40d0154fe/volumes" Mar 14 09:02:03 crc kubenswrapper[4869]: I0314 09:02:03.713320 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f957b5fc-0868-44c5-aee8-716147a9e18f" path="/var/lib/kubelet/pods/f957b5fc-0868-44c5-aee8-716147a9e18f/volumes" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.046870 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-25 23:05:14.505398229 +0000 UTC Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.046930 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6158h3m10.458476259s for next certificate rotation Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.114049 4869 generic.go:334] "Generic (PLEG): container finished" podID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerID="785c369dbcd49eac9fe1721660cb32bc3ebfab628b10a2590dade17538a063c1" exitCode=0 Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.114161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxrzk" event={"ID":"5afc16e4-c9b7-493a-be94-02e5f318c725","Type":"ContainerDied","Data":"785c369dbcd49eac9fe1721660cb32bc3ebfab628b10a2590dade17538a063c1"} Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.121291 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cf893e7e-0007-4706-a328-f905eadbbe46","Type":"ContainerDied","Data":"b1fa5e2583d5765e59de0cfa1c72c61c3cb8e0f71cc2247de757b672b8186801"} Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.121339 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1fa5e2583d5765e59de0cfa1c72c61c3cb8e0f71cc2247de757b672b8186801" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.121396 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.126799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" event={"ID":"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2","Type":"ContainerStarted","Data":"bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0"} Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.126829 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" event={"ID":"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2","Type":"ContainerStarted","Data":"83d3f5e2951a7643571f831bb10a87e34b5e69132b1c27292bb1a6dba8dc2d48"} Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.126986 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.131879 4869 generic.go:334] "Generic (PLEG): container finished" podID="28fc8bb0-4a61-40cf-809f-408035a85c2e" containerID="9c7696676a23e9ff081bb8cac2b959e068640f92185d39b2ce6d8912c35ed709" exitCode=0 Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.132100 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557982-m47g2" event={"ID":"28fc8bb0-4a61-40cf-809f-408035a85c2e","Type":"ContainerDied","Data":"9c7696676a23e9ff081bb8cac2b959e068640f92185d39b2ce6d8912c35ed709"} Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.134619 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.212569 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" podStartSLOduration=10.212508926 
podStartE2EDuration="10.212508926s" podCreationTimestamp="2026-03-14 09:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:04.212167637 +0000 UTC m=+277.184449700" watchObservedRunningTime="2026-03-14 09:02:04.212508926 +0000 UTC m=+277.184790989" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.528921 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.597518 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvxjk\" (UniqueName: \"kubernetes.io/projected/db3ce98b-d0f8-4fda-84cb-390a11eb508e-kube-api-access-xvxjk\") pod \"db3ce98b-d0f8-4fda-84cb-390a11eb508e\" (UID: \"db3ce98b-d0f8-4fda-84cb-390a11eb508e\") " Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.624926 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3ce98b-d0f8-4fda-84cb-390a11eb508e-kube-api-access-xvxjk" (OuterVolumeSpecName: "kube-api-access-xvxjk") pod "db3ce98b-d0f8-4fda-84cb-390a11eb508e" (UID: "db3ce98b-d0f8-4fda-84cb-390a11eb508e"). InnerVolumeSpecName "kube-api-access-xvxjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.699058 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvxjk\" (UniqueName: \"kubernetes.io/projected/db3ce98b-d0f8-4fda-84cb-390a11eb508e-kube-api-access-xvxjk\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.874910 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992"] Mar 14 09:02:04 crc kubenswrapper[4869]: E0314 09:02:04.875889 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3ce98b-d0f8-4fda-84cb-390a11eb508e" containerName="oc" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.875907 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3ce98b-d0f8-4fda-84cb-390a11eb508e" containerName="oc" Mar 14 09:02:04 crc kubenswrapper[4869]: E0314 09:02:04.875915 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf893e7e-0007-4706-a328-f905eadbbe46" containerName="pruner" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.875922 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf893e7e-0007-4706-a328-f905eadbbe46" containerName="pruner" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.876027 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3ce98b-d0f8-4fda-84cb-390a11eb508e" containerName="oc" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.876060 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf893e7e-0007-4706-a328-f905eadbbe46" containerName="pruner" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.876852 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.879551 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.879790 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.880012 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.880168 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.880782 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.881254 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.889106 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992"] Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.902942 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9pmk\" (UniqueName: \"kubernetes.io/projected/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-kube-api-access-w9pmk\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.903085 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-config\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.903135 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-serving-cert\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:04 crc kubenswrapper[4869]: I0314 09:02:04.903192 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-client-ca\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.004719 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-serving-cert\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.004796 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-client-ca\") pod 
\"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.005771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9pmk\" (UniqueName: \"kubernetes.io/projected/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-kube-api-access-w9pmk\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.005862 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-config\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.006076 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-client-ca\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.007862 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-config\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.016420 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-serving-cert\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.035009 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9pmk\" (UniqueName: \"kubernetes.io/projected/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-kube-api-access-w9pmk\") pod \"route-controller-manager-57fb4ff849-zg992\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.142436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" event={"ID":"db3ce98b-d0f8-4fda-84cb-390a11eb508e","Type":"ContainerDied","Data":"9400c16561ce2d610a5c770a3716236743a27b4b9214af7f317a5465ff337903"} Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.142563 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9400c16561ce2d610a5c770a3716236743a27b4b9214af7f317a5465ff337903" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.142566 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557980-9t5kk" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.206999 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.426794 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557982-m47g2" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.618229 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l8pq\" (UniqueName: \"kubernetes.io/projected/28fc8bb0-4a61-40cf-809f-408035a85c2e-kube-api-access-9l8pq\") pod \"28fc8bb0-4a61-40cf-809f-408035a85c2e\" (UID: \"28fc8bb0-4a61-40cf-809f-408035a85c2e\") " Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.623408 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28fc8bb0-4a61-40cf-809f-408035a85c2e-kube-api-access-9l8pq" (OuterVolumeSpecName: "kube-api-access-9l8pq") pod "28fc8bb0-4a61-40cf-809f-408035a85c2e" (UID: "28fc8bb0-4a61-40cf-809f-408035a85c2e"). InnerVolumeSpecName "kube-api-access-9l8pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.641649 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992"] Mar 14 09:02:05 crc kubenswrapper[4869]: W0314 09:02:05.651261 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07fe2a15_a6af_4104_8cc0_6ee9485e0c86.slice/crio-44d0d5a245ceb583a45fe8f84d6b0be8a5102c9b9c36f91f5ff2ea0626c381aa WatchSource:0}: Error finding container 44d0d5a245ceb583a45fe8f84d6b0be8a5102c9b9c36f91f5ff2ea0626c381aa: Status 404 returned error can't find the container with id 44d0d5a245ceb583a45fe8f84d6b0be8a5102c9b9c36f91f5ff2ea0626c381aa Mar 14 09:02:05 crc kubenswrapper[4869]: I0314 09:02:05.719822 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l8pq\" (UniqueName: \"kubernetes.io/projected/28fc8bb0-4a61-40cf-809f-408035a85c2e-kube-api-access-9l8pq\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:06 crc kubenswrapper[4869]: I0314 09:02:06.159214 
4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557982-m47g2" Mar 14 09:02:06 crc kubenswrapper[4869]: I0314 09:02:06.159206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557982-m47g2" event={"ID":"28fc8bb0-4a61-40cf-809f-408035a85c2e","Type":"ContainerDied","Data":"df11f2bb98a147ea479b37c6708ae60305780c84b802fa3d0af3a9fbc750c617"} Mar 14 09:02:06 crc kubenswrapper[4869]: I0314 09:02:06.159929 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df11f2bb98a147ea479b37c6708ae60305780c84b802fa3d0af3a9fbc750c617" Mar 14 09:02:06 crc kubenswrapper[4869]: I0314 09:02:06.163163 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" event={"ID":"07fe2a15-a6af-4104-8cc0-6ee9485e0c86","Type":"ContainerStarted","Data":"5159d740cd40ec1dd7182c0eeeb18b0b53fe17ed986d8d2539daffb0192414da"} Mar 14 09:02:06 crc kubenswrapper[4869]: I0314 09:02:06.163264 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" event={"ID":"07fe2a15-a6af-4104-8cc0-6ee9485e0c86","Type":"ContainerStarted","Data":"44d0d5a245ceb583a45fe8f84d6b0be8a5102c9b9c36f91f5ff2ea0626c381aa"} Mar 14 09:02:07 crc kubenswrapper[4869]: I0314 09:02:07.170912 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:07 crc kubenswrapper[4869]: I0314 09:02:07.177689 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:07 crc kubenswrapper[4869]: I0314 09:02:07.196747 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" podStartSLOduration=13.196724793 podStartE2EDuration="13.196724793s" podCreationTimestamp="2026-03-14 09:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:07.188807243 +0000 UTC m=+280.161089296" watchObservedRunningTime="2026-03-14 09:02:07.196724793 +0000 UTC m=+280.169006846" Mar 14 09:02:07 crc kubenswrapper[4869]: I0314 09:02:07.504609 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zgn62" Mar 14 09:02:09 crc kubenswrapper[4869]: I0314 09:02:09.605485 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:02:16 crc kubenswrapper[4869]: I0314 09:02:09.606138 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:02:16 crc kubenswrapper[4869]: I0314 09:02:09.606239 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:02:16 crc kubenswrapper[4869]: I0314 09:02:09.607468 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:02:16 crc kubenswrapper[4869]: I0314 09:02:09.607608 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b" gracePeriod=600 Mar 14 09:02:16 crc kubenswrapper[4869]: I0314 09:02:11.210481 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b" exitCode=0 Mar 14 09:02:16 crc kubenswrapper[4869]: I0314 09:02:11.210605 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b"} Mar 14 09:02:19 crc kubenswrapper[4869]: E0314 09:02:19.223734 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 14 09:02:19 crc kubenswrapper[4869]: E0314 09:02:19.224840 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9fph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wt8jx_openshift-marketplace(25990a28-3536-4602-9439-666774908da0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:02:19 crc kubenswrapper[4869]: E0314 09:02:19.226217 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wt8jx" podUID="25990a28-3536-4602-9439-666774908da0" Mar 14 09:02:19 crc 
kubenswrapper[4869]: I0314 09:02:19.288216 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"507a06780201c47c66d5c51feef654718e70befa4486d8f6554644934872ffc0"} Mar 14 09:02:19 crc kubenswrapper[4869]: I0314 09:02:19.292695 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxrzk" event={"ID":"5afc16e4-c9b7-493a-be94-02e5f318c725","Type":"ContainerStarted","Data":"9167aaf586c6086e5c6ce97a5ba3a4a9b3a64308459a1b4479a44c1632a6d398"} Mar 14 09:02:19 crc kubenswrapper[4869]: E0314 09:02:19.294936 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wt8jx" podUID="25990a28-3536-4602-9439-666774908da0" Mar 14 09:02:20 crc kubenswrapper[4869]: I0314 09:02:20.356880 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sxrzk" podStartSLOduration=43.988210836 podStartE2EDuration="1m1.356861273s" podCreationTimestamp="2026-03-14 09:01:19 +0000 UTC" firstStartedPulling="2026-03-14 09:02:00.993958545 +0000 UTC m=+273.966240598" lastFinishedPulling="2026-03-14 09:02:18.362608972 +0000 UTC m=+291.334891035" observedRunningTime="2026-03-14 09:02:20.353697124 +0000 UTC m=+293.325979187" watchObservedRunningTime="2026-03-14 09:02:20.356861273 +0000 UTC m=+293.329143316" Mar 14 09:02:20 crc kubenswrapper[4869]: E0314 09:02:20.453054 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 14 09:02:20 crc kubenswrapper[4869]: 
E0314 09:02:20.453293 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tqlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sfjjg_openshift-marketplace(49ae5a4f-b968-45b6-8f1a-2a96b7af34b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 14 09:02:20 crc kubenswrapper[4869]: E0314 09:02:20.454526 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sfjjg" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" Mar 14 09:02:21 crc kubenswrapper[4869]: E0314 09:02:21.329811 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sfjjg" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.319410 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xwqm" event={"ID":"0bb7315d-59e6-4f41-a983-700a083a75af","Type":"ContainerDied","Data":"263687882e23e808b8e65e1a4e427e4bdc98bf063474d2201816e1b5929decb3"} Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.319324 4869 generic.go:334] "Generic (PLEG): container finished" podID="0bb7315d-59e6-4f41-a983-700a083a75af" containerID="263687882e23e808b8e65e1a4e427e4bdc98bf063474d2201816e1b5929decb3" exitCode=0 Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.324283 4869 generic.go:334] "Generic (PLEG): container finished" podID="a1ae3c37-af29-4957-9648-52c28558591e" containerID="f2c1be36ecae5ccb1c4994b623929988abe427d62839937983de56acbb79b0fe" exitCode=0 Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.324352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klzzb" event={"ID":"a1ae3c37-af29-4957-9648-52c28558591e","Type":"ContainerDied","Data":"f2c1be36ecae5ccb1c4994b623929988abe427d62839937983de56acbb79b0fe"} Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.329519 4869 generic.go:334] "Generic (PLEG): container finished" podID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" 
containerID="47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0" exitCode=0 Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.329580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cz2t" event={"ID":"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66","Type":"ContainerDied","Data":"47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0"} Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.332387 4869 generic.go:334] "Generic (PLEG): container finished" podID="8466d496-2ca4-49f2-96ff-75386b047783" containerID="95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952" exitCode=0 Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.332479 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94926" event={"ID":"8466d496-2ca4-49f2-96ff-75386b047783","Type":"ContainerDied","Data":"95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952"} Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.334267 4869 generic.go:334] "Generic (PLEG): container finished" podID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerID="2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9" exitCode=0 Mar 14 09:02:22 crc kubenswrapper[4869]: I0314 09:02:22.334310 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kv6g" event={"ID":"40c9b0bd-b30e-470c-bf30-bd55c35e2e84","Type":"ContainerDied","Data":"2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9"} Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.343734 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cz2t" event={"ID":"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66","Type":"ContainerStarted","Data":"7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416"} Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.350369 4869 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-94926" event={"ID":"8466d496-2ca4-49f2-96ff-75386b047783","Type":"ContainerStarted","Data":"1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235"} Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.357331 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kv6g" event={"ID":"40c9b0bd-b30e-470c-bf30-bd55c35e2e84","Type":"ContainerStarted","Data":"677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62"} Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.360391 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xwqm" event={"ID":"0bb7315d-59e6-4f41-a983-700a083a75af","Type":"ContainerStarted","Data":"b097135e2794b7f1e369dfbd17075ff124f901de020e95885bc9b4509839a55c"} Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.363238 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klzzb" event={"ID":"a1ae3c37-af29-4957-9648-52c28558591e","Type":"ContainerStarted","Data":"6185e3d9d1e751d9937dccb690f585c48e5ac97f77366792ee0561917e4bda81"} Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.372317 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6cz2t" podStartSLOduration=10.666305972 podStartE2EDuration="1m7.372297095s" podCreationTimestamp="2026-03-14 09:01:16 +0000 UTC" firstStartedPulling="2026-03-14 09:01:26.045300509 +0000 UTC m=+239.017582562" lastFinishedPulling="2026-03-14 09:02:22.751291632 +0000 UTC m=+295.723573685" observedRunningTime="2026-03-14 09:02:23.369400632 +0000 UTC m=+296.341682685" watchObservedRunningTime="2026-03-14 09:02:23.372297095 +0000 UTC m=+296.344579148" Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.394113 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-9xwqm" podStartSLOduration=9.670705852 podStartE2EDuration="1m6.394090883s" podCreationTimestamp="2026-03-14 09:01:17 +0000 UTC" firstStartedPulling="2026-03-14 09:01:26.044655552 +0000 UTC m=+239.016937605" lastFinishedPulling="2026-03-14 09:02:22.768040583 +0000 UTC m=+295.740322636" observedRunningTime="2026-03-14 09:02:23.392015551 +0000 UTC m=+296.364297604" watchObservedRunningTime="2026-03-14 09:02:23.394090883 +0000 UTC m=+296.366372936" Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.413599 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-klzzb" podStartSLOduration=9.616733655 podStartE2EDuration="1m6.413573323s" podCreationTimestamp="2026-03-14 09:01:17 +0000 UTC" firstStartedPulling="2026-03-14 09:01:26.044195181 +0000 UTC m=+239.016477234" lastFinishedPulling="2026-03-14 09:02:22.841034839 +0000 UTC m=+295.813316902" observedRunningTime="2026-03-14 09:02:23.412037595 +0000 UTC m=+296.384319648" watchObservedRunningTime="2026-03-14 09:02:23.413573323 +0000 UTC m=+296.385855376" Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.431690 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94926" podStartSLOduration=2.746650241 podStartE2EDuration="1m7.431669648s" podCreationTimestamp="2026-03-14 09:01:16 +0000 UTC" firstStartedPulling="2026-03-14 09:01:18.129885105 +0000 UTC m=+231.102167158" lastFinishedPulling="2026-03-14 09:02:22.814904502 +0000 UTC m=+295.787186565" observedRunningTime="2026-03-14 09:02:23.428849898 +0000 UTC m=+296.401131951" watchObservedRunningTime="2026-03-14 09:02:23.431669648 +0000 UTC m=+296.403951701" Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.652885 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9kv6g" podStartSLOduration=8.748238543 
podStartE2EDuration="1m5.652860933s" podCreationTimestamp="2026-03-14 09:01:18 +0000 UTC" firstStartedPulling="2026-03-14 09:01:26.04932607 +0000 UTC m=+239.021608123" lastFinishedPulling="2026-03-14 09:02:22.95394846 +0000 UTC m=+295.926230513" observedRunningTime="2026-03-14 09:02:23.464670079 +0000 UTC m=+296.436952132" watchObservedRunningTime="2026-03-14 09:02:23.652860933 +0000 UTC m=+296.625142986" Mar 14 09:02:23 crc kubenswrapper[4869]: I0314 09:02:23.655177 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c25vk"] Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.045592 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-94926" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.048777 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94926" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.254214 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.254380 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.363501 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.365152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-94926" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.439987 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.497265 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.498544 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.539140 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.727137 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.727204 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:02:27 crc kubenswrapper[4869]: I0314 09:02:27.777812 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:02:28 crc kubenswrapper[4869]: I0314 09:02:28.447775 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:02:28 crc kubenswrapper[4869]: I0314 09:02:28.458680 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94926" Mar 14 09:02:28 crc kubenswrapper[4869]: I0314 09:02:28.459613 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:02:29 crc kubenswrapper[4869]: I0314 09:02:29.256554 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:02:29 crc kubenswrapper[4869]: I0314 09:02:29.256991 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:02:29 crc kubenswrapper[4869]: I0314 09:02:29.325204 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:02:29 crc kubenswrapper[4869]: I0314 09:02:29.449788 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:02:29 crc kubenswrapper[4869]: I0314 09:02:29.525997 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klzzb"] Mar 14 09:02:29 crc kubenswrapper[4869]: I0314 09:02:29.726684 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xwqm"] Mar 14 09:02:29 crc kubenswrapper[4869]: I0314 09:02:29.809752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:02:29 crc kubenswrapper[4869]: I0314 09:02:29.809833 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:02:29 crc kubenswrapper[4869]: I0314 09:02:29.850019 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:02:30 crc kubenswrapper[4869]: I0314 09:02:30.412206 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-klzzb" podUID="a1ae3c37-af29-4957-9648-52c28558591e" containerName="registry-server" containerID="cri-o://6185e3d9d1e751d9937dccb690f585c48e5ac97f77366792ee0561917e4bda81" gracePeriod=2 Mar 14 09:02:30 crc kubenswrapper[4869]: I0314 09:02:30.507868 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:02:31 crc kubenswrapper[4869]: I0314 09:02:31.420851 4869 generic.go:334] "Generic 
(PLEG): container finished" podID="a1ae3c37-af29-4957-9648-52c28558591e" containerID="6185e3d9d1e751d9937dccb690f585c48e5ac97f77366792ee0561917e4bda81" exitCode=0 Mar 14 09:02:31 crc kubenswrapper[4869]: I0314 09:02:31.420936 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klzzb" event={"ID":"a1ae3c37-af29-4957-9648-52c28558591e","Type":"ContainerDied","Data":"6185e3d9d1e751d9937dccb690f585c48e5ac97f77366792ee0561917e4bda81"} Mar 14 09:02:31 crc kubenswrapper[4869]: I0314 09:02:31.421797 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9xwqm" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" containerName="registry-server" containerID="cri-o://b097135e2794b7f1e369dfbd17075ff124f901de020e95885bc9b4509839a55c" gracePeriod=2 Mar 14 09:02:31 crc kubenswrapper[4869]: I0314 09:02:31.926152 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxrzk"] Mar 14 09:02:31 crc kubenswrapper[4869]: I0314 09:02:31.927165 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.092450 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-catalog-content\") pod \"a1ae3c37-af29-4957-9648-52c28558591e\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.092641 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsq2f\" (UniqueName: \"kubernetes.io/projected/a1ae3c37-af29-4957-9648-52c28558591e-kube-api-access-vsq2f\") pod \"a1ae3c37-af29-4957-9648-52c28558591e\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.092716 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-utilities\") pod \"a1ae3c37-af29-4957-9648-52c28558591e\" (UID: \"a1ae3c37-af29-4957-9648-52c28558591e\") " Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.103524 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-utilities" (OuterVolumeSpecName: "utilities") pod "a1ae3c37-af29-4957-9648-52c28558591e" (UID: "a1ae3c37-af29-4957-9648-52c28558591e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.112769 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ae3c37-af29-4957-9648-52c28558591e-kube-api-access-vsq2f" (OuterVolumeSpecName: "kube-api-access-vsq2f") pod "a1ae3c37-af29-4957-9648-52c28558591e" (UID: "a1ae3c37-af29-4957-9648-52c28558591e"). InnerVolumeSpecName "kube-api-access-vsq2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.175190 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1ae3c37-af29-4957-9648-52c28558591e" (UID: "a1ae3c37-af29-4957-9648-52c28558591e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.195721 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsq2f\" (UniqueName: \"kubernetes.io/projected/a1ae3c37-af29-4957-9648-52c28558591e-kube-api-access-vsq2f\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.195757 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.195770 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1ae3c37-af29-4957-9648-52c28558591e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.432846 4869 generic.go:334] "Generic (PLEG): container finished" podID="0bb7315d-59e6-4f41-a983-700a083a75af" containerID="b097135e2794b7f1e369dfbd17075ff124f901de020e95885bc9b4509839a55c" exitCode=0 Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.433210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xwqm" event={"ID":"0bb7315d-59e6-4f41-a983-700a083a75af","Type":"ContainerDied","Data":"b097135e2794b7f1e369dfbd17075ff124f901de020e95885bc9b4509839a55c"} Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.436139 4869 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klzzb" Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.436199 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klzzb" event={"ID":"a1ae3c37-af29-4957-9648-52c28558591e","Type":"ContainerDied","Data":"8fdcacfc466131ce5fe25e6866d191382ac345315168e196737da93a3ea85d66"} Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.436248 4869 scope.go:117] "RemoveContainer" containerID="6185e3d9d1e751d9937dccb690f585c48e5ac97f77366792ee0561917e4bda81" Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.436549 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sxrzk" podUID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerName="registry-server" containerID="cri-o://9167aaf586c6086e5c6ce97a5ba3a4a9b3a64308459a1b4479a44c1632a6d398" gracePeriod=2 Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.473744 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klzzb"] Mar 14 09:02:32 crc kubenswrapper[4869]: I0314 09:02:32.476905 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-klzzb"] Mar 14 09:02:33 crc kubenswrapper[4869]: I0314 09:02:33.445398 4869 generic.go:334] "Generic (PLEG): container finished" podID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerID="9167aaf586c6086e5c6ce97a5ba3a4a9b3a64308459a1b4479a44c1632a6d398" exitCode=0 Mar 14 09:02:33 crc kubenswrapper[4869]: I0314 09:02:33.445483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxrzk" event={"ID":"5afc16e4-c9b7-493a-be94-02e5f318c725","Type":"ContainerDied","Data":"9167aaf586c6086e5c6ce97a5ba3a4a9b3a64308459a1b4479a44c1632a6d398"} Mar 14 09:02:33 crc kubenswrapper[4869]: I0314 09:02:33.711199 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a1ae3c37-af29-4957-9648-52c28558591e" path="/var/lib/kubelet/pods/a1ae3c37-af29-4957-9648-52c28558591e/volumes" Mar 14 09:02:33 crc kubenswrapper[4869]: I0314 09:02:33.770167 4869 scope.go:117] "RemoveContainer" containerID="f2c1be36ecae5ccb1c4994b623929988abe427d62839937983de56acbb79b0fe" Mar 14 09:02:33 crc kubenswrapper[4869]: I0314 09:02:33.838813 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:02:33 crc kubenswrapper[4869]: I0314 09:02:33.995577 4869 scope.go:117] "RemoveContainer" containerID="7b3b16e6bd5d757fc8487a7b9b407963c46d20c8aec95e0b9e21f6b142ceabb0" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.024665 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-utilities\") pod \"0bb7315d-59e6-4f41-a983-700a083a75af\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.024721 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-catalog-content\") pod \"0bb7315d-59e6-4f41-a983-700a083a75af\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.024840 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cjb9\" (UniqueName: \"kubernetes.io/projected/0bb7315d-59e6-4f41-a983-700a083a75af-kube-api-access-2cjb9\") pod \"0bb7315d-59e6-4f41-a983-700a083a75af\" (UID: \"0bb7315d-59e6-4f41-a983-700a083a75af\") " Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.025940 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-utilities" (OuterVolumeSpecName: 
"utilities") pod "0bb7315d-59e6-4f41-a983-700a083a75af" (UID: "0bb7315d-59e6-4f41-a983-700a083a75af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.037859 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb7315d-59e6-4f41-a983-700a083a75af-kube-api-access-2cjb9" (OuterVolumeSpecName: "kube-api-access-2cjb9") pod "0bb7315d-59e6-4f41-a983-700a083a75af" (UID: "0bb7315d-59e6-4f41-a983-700a083a75af"). InnerVolumeSpecName "kube-api-access-2cjb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.085378 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bb7315d-59e6-4f41-a983-700a083a75af" (UID: "0bb7315d-59e6-4f41-a983-700a083a75af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.126159 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cjb9\" (UniqueName: \"kubernetes.io/projected/0bb7315d-59e6-4f41-a983-700a083a75af-kube-api-access-2cjb9\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.126194 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.126224 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bb7315d-59e6-4f41-a983-700a083a75af-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.265449 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d486fb69d-kg55t"] Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.266169 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" podUID="d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" containerName="controller-manager" containerID="cri-o://bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0" gracePeriod=30 Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.293742 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.356395 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992"] Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.356971 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" podUID="07fe2a15-a6af-4104-8cc0-6ee9485e0c86" containerName="route-controller-manager" containerID="cri-o://5159d740cd40ec1dd7182c0eeeb18b0b53fe17ed986d8d2539daffb0192414da" gracePeriod=30 Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.431132 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-utilities\") pod \"5afc16e4-c9b7-493a-be94-02e5f318c725\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.431188 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-catalog-content\") pod \"5afc16e4-c9b7-493a-be94-02e5f318c725\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.431318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzgmx\" (UniqueName: \"kubernetes.io/projected/5afc16e4-c9b7-493a-be94-02e5f318c725-kube-api-access-hzgmx\") pod \"5afc16e4-c9b7-493a-be94-02e5f318c725\" (UID: \"5afc16e4-c9b7-493a-be94-02e5f318c725\") " Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.432280 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-utilities" (OuterVolumeSpecName: 
"utilities") pod "5afc16e4-c9b7-493a-be94-02e5f318c725" (UID: "5afc16e4-c9b7-493a-be94-02e5f318c725"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.435011 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5afc16e4-c9b7-493a-be94-02e5f318c725-kube-api-access-hzgmx" (OuterVolumeSpecName: "kube-api-access-hzgmx") pod "5afc16e4-c9b7-493a-be94-02e5f318c725" (UID: "5afc16e4-c9b7-493a-be94-02e5f318c725"). InnerVolumeSpecName "kube-api-access-hzgmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.455078 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xwqm" event={"ID":"0bb7315d-59e6-4f41-a983-700a083a75af","Type":"ContainerDied","Data":"6d20b6d4fb035727306883a419030fe63fdf3c43940200a805a3d1f7d525c05a"} Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.455140 4869 scope.go:117] "RemoveContainer" containerID="b097135e2794b7f1e369dfbd17075ff124f901de020e95885bc9b4509839a55c" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.455148 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xwqm" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.458600 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5afc16e4-c9b7-493a-be94-02e5f318c725" (UID: "5afc16e4-c9b7-493a-be94-02e5f318c725"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.461700 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxrzk" event={"ID":"5afc16e4-c9b7-493a-be94-02e5f318c725","Type":"ContainerDied","Data":"cc6ea94056a27b82b9b3d51862876106347bc8717eecf8f206b2442d9766747d"} Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.461774 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxrzk" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.475433 4869 scope.go:117] "RemoveContainer" containerID="263687882e23e808b8e65e1a4e427e4bdc98bf063474d2201816e1b5929decb3" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.495858 4869 scope.go:117] "RemoveContainer" containerID="bd315f815ebbc994d8577dea670b68bbf1cc964e9f6c5bdcbed08f35454a5155" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.503421 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xwqm"] Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.508927 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9xwqm"] Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.520741 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxrzk"] Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.524303 4869 scope.go:117] "RemoveContainer" containerID="9167aaf586c6086e5c6ce97a5ba3a4a9b3a64308459a1b4479a44c1632a6d398" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.525955 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxrzk"] Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.533563 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzgmx\" (UniqueName: 
\"kubernetes.io/projected/5afc16e4-c9b7-493a-be94-02e5f318c725-kube-api-access-hzgmx\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.533609 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.533622 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afc16e4-c9b7-493a-be94-02e5f318c725-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.540736 4869 scope.go:117] "RemoveContainer" containerID="785c369dbcd49eac9fe1721660cb32bc3ebfab628b10a2590dade17538a063c1" Mar 14 09:02:34 crc kubenswrapper[4869]: I0314 09:02:34.566778 4869 scope.go:117] "RemoveContainer" containerID="9dcc4dbe1b0cb716615c54962d6647b2899ba481cb959358fcbf1a1666020288" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.209540 4869 patch_prober.go:28] interesting pod/route-controller-manager-57fb4ff849-zg992 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.210185 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" podUID="07fe2a15-a6af-4104-8cc0-6ee9485e0c86" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.401759 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.471698 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfjjg" event={"ID":"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4","Type":"ContainerStarted","Data":"0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620"} Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.474090 4869 generic.go:334] "Generic (PLEG): container finished" podID="d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" containerID="bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0" exitCode=0 Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.474137 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" event={"ID":"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2","Type":"ContainerDied","Data":"bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0"} Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.474155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" event={"ID":"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2","Type":"ContainerDied","Data":"83d3f5e2951a7643571f831bb10a87e34b5e69132b1c27292bb1a6dba8dc2d48"} Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.474175 4869 scope.go:117] "RemoveContainer" containerID="bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.474238 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d486fb69d-kg55t" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.477943 4869 generic.go:334] "Generic (PLEG): container finished" podID="07fe2a15-a6af-4104-8cc0-6ee9485e0c86" containerID="5159d740cd40ec1dd7182c0eeeb18b0b53fe17ed986d8d2539daffb0192414da" exitCode=0 Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.478017 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" event={"ID":"07fe2a15-a6af-4104-8cc0-6ee9485e0c86","Type":"ContainerDied","Data":"5159d740cd40ec1dd7182c0eeeb18b0b53fe17ed986d8d2539daffb0192414da"} Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.482920 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wt8jx" event={"ID":"25990a28-3536-4602-9439-666774908da0","Type":"ContainerStarted","Data":"5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0"} Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.497716 4869 scope.go:117] "RemoveContainer" containerID="bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.498350 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0\": container with ID starting with bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0 not found: ID does not exist" containerID="bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.498557 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0"} err="failed to get container status 
\"bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0\": rpc error: code = NotFound desc = could not find container \"bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0\": container with ID starting with bff124b0dcbd5b23b6ebe87de520a00d676168b6e3ecf43ee5bb4e7335c265c0 not found: ID does not exist" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.548821 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-serving-cert\") pod \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.548912 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgnlk\" (UniqueName: \"kubernetes.io/projected/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-kube-api-access-jgnlk\") pod \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.548937 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-proxy-ca-bundles\") pod \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.549078 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-config\") pod \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.549114 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-client-ca\") pod 
\"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\" (UID: \"d0e1bcfa-21f3-446f-aa5c-38f6abce60b2\") " Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.550017 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" (UID: "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.550037 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-client-ca" (OuterVolumeSpecName: "client-ca") pod "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" (UID: "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.550214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-config" (OuterVolumeSpecName: "config") pod "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" (UID: "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.556799 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" (UID: "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.557005 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-kube-api-access-jgnlk" (OuterVolumeSpecName: "kube-api-access-jgnlk") pod "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" (UID: "d0e1bcfa-21f3-446f-aa5c-38f6abce60b2"). InnerVolumeSpecName "kube-api-access-jgnlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.651090 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgnlk\" (UniqueName: \"kubernetes.io/projected/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-kube-api-access-jgnlk\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.651145 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.651160 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.651172 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.651184 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.717994 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0bb7315d-59e6-4f41-a983-700a083a75af" path="/var/lib/kubelet/pods/0bb7315d-59e6-4f41-a983-700a083a75af/volumes" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.718840 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5afc16e4-c9b7-493a-be94-02e5f318c725" path="/var/lib/kubelet/pods/5afc16e4-c9b7-493a-be94-02e5f318c725/volumes" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.796448 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d486fb69d-kg55t"] Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.806536 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d486fb69d-kg55t"] Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896422 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c764f6b9-5g8qb"] Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896707 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerName="registry-server" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896723 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerName="registry-server" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896735 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae3c37-af29-4957-9648-52c28558591e" containerName="extract-content" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896743 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae3c37-af29-4957-9648-52c28558591e" containerName="extract-content" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896760 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" containerName="extract-content" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896769 
4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" containerName="extract-content" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896789 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae3c37-af29-4957-9648-52c28558591e" containerName="registry-server" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896797 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae3c37-af29-4957-9648-52c28558591e" containerName="registry-server" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896807 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" containerName="extract-utilities" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896815 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" containerName="extract-utilities" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896827 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" containerName="controller-manager" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896837 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" containerName="controller-manager" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896848 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae3c37-af29-4957-9648-52c28558591e" containerName="extract-utilities" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896857 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae3c37-af29-4957-9648-52c28558591e" containerName="extract-utilities" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896866 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerName="extract-utilities" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 
09:02:35.896875 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerName="extract-utilities" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896886 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerName="extract-content" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896894 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerName="extract-content" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896908 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" containerName="registry-server" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896916 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" containerName="registry-server" Mar 14 09:02:35 crc kubenswrapper[4869]: E0314 09:02:35.896927 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fc8bb0-4a61-40cf-809f-408035a85c2e" containerName="oc" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.896935 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fc8bb0-4a61-40cf-809f-408035a85c2e" containerName="oc" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.897057 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5afc16e4-c9b7-493a-be94-02e5f318c725" containerName="registry-server" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.897074 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="28fc8bb0-4a61-40cf-809f-408035a85c2e" containerName="oc" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.897088 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" containerName="controller-manager" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.897101 4869 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb7315d-59e6-4f41-a983-700a083a75af" containerName="registry-server" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.897115 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae3c37-af29-4957-9648-52c28558591e" containerName="registry-server" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.897675 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.905282 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.905832 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.906142 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.906343 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.906831 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.906870 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.909485 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c764f6b9-5g8qb"] Mar 14 09:02:35 crc kubenswrapper[4869]: I0314 09:02:35.930950 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.061992 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqlwf\" (UniqueName: \"kubernetes.io/projected/acef2516-cdf2-4e58-bae8-00290015b684-kube-api-access-jqlwf\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.062325 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acef2516-cdf2-4e58-bae8-00290015b684-serving-cert\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.062429 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-config\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.062642 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-proxy-ca-bundles\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.062795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-client-ca\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.163771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqlwf\" (UniqueName: \"kubernetes.io/projected/acef2516-cdf2-4e58-bae8-00290015b684-kube-api-access-jqlwf\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.163831 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acef2516-cdf2-4e58-bae8-00290015b684-serving-cert\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.163860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-config\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.163878 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-proxy-ca-bundles\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.163910 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-client-ca\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.164885 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-client-ca\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.166387 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-config\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.166407 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.167117 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-proxy-ca-bundles\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.168414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acef2516-cdf2-4e58-bae8-00290015b684-serving-cert\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.189064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqlwf\" (UniqueName: \"kubernetes.io/projected/acef2516-cdf2-4e58-bae8-00290015b684-kube-api-access-jqlwf\") pod \"controller-manager-7c764f6b9-5g8qb\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.252044 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.266341 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-client-ca\") pod \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.267112 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-config\") pod \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.267166 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-serving-cert\") pod \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.267250 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9pmk\" (UniqueName: \"kubernetes.io/projected/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-kube-api-access-w9pmk\") pod \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\" (UID: \"07fe2a15-a6af-4104-8cc0-6ee9485e0c86\") " Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.268443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-client-ca" (OuterVolumeSpecName: "client-ca") pod "07fe2a15-a6af-4104-8cc0-6ee9485e0c86" (UID: "07fe2a15-a6af-4104-8cc0-6ee9485e0c86"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.268572 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-config" (OuterVolumeSpecName: "config") pod "07fe2a15-a6af-4104-8cc0-6ee9485e0c86" (UID: "07fe2a15-a6af-4104-8cc0-6ee9485e0c86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.271221 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07fe2a15-a6af-4104-8cc0-6ee9485e0c86" (UID: "07fe2a15-a6af-4104-8cc0-6ee9485e0c86"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.271357 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-kube-api-access-w9pmk" (OuterVolumeSpecName: "kube-api-access-w9pmk") pod "07fe2a15-a6af-4104-8cc0-6ee9485e0c86" (UID: "07fe2a15-a6af-4104-8cc0-6ee9485e0c86"). InnerVolumeSpecName "kube-api-access-w9pmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.369037 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.369080 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.369089 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.369100 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9pmk\" (UniqueName: \"kubernetes.io/projected/07fe2a15-a6af-4104-8cc0-6ee9485e0c86-kube-api-access-w9pmk\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.446864 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c764f6b9-5g8qb"] Mar 14 09:02:36 crc kubenswrapper[4869]: W0314 09:02:36.451540 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacef2516_cdf2_4e58_bae8_00290015b684.slice/crio-e929b58d357cbc461a115732f35a6578f5e59b2801b39adbaf3736dda43dd227 WatchSource:0}: Error finding container e929b58d357cbc461a115732f35a6578f5e59b2801b39adbaf3736dda43dd227: Status 404 returned error can't find the container with id e929b58d357cbc461a115732f35a6578f5e59b2801b39adbaf3736dda43dd227 Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.491278 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" event={"ID":"acef2516-cdf2-4e58-bae8-00290015b684","Type":"ContainerStarted","Data":"e929b58d357cbc461a115732f35a6578f5e59b2801b39adbaf3736dda43dd227"} Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.493627 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" event={"ID":"07fe2a15-a6af-4104-8cc0-6ee9485e0c86","Type":"ContainerDied","Data":"44d0d5a245ceb583a45fe8f84d6b0be8a5102c9b9c36f91f5ff2ea0626c381aa"} Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.493678 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.493681 4869 scope.go:117] "RemoveContainer" containerID="5159d740cd40ec1dd7182c0eeeb18b0b53fe17ed986d8d2539daffb0192414da" Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.497216 4869 generic.go:334] "Generic (PLEG): container finished" podID="25990a28-3536-4602-9439-666774908da0" containerID="5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0" exitCode=0 Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.497315 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wt8jx" event={"ID":"25990a28-3536-4602-9439-666774908da0","Type":"ContainerDied","Data":"5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0"} Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.506245 4869 generic.go:334] "Generic (PLEG): container finished" podID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerID="0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620" exitCode=0 Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.506292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfjjg" 
event={"ID":"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4","Type":"ContainerDied","Data":"0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620"} Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.559938 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992"] Mar 14 09:02:36 crc kubenswrapper[4869]: I0314 09:02:36.562986 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57fb4ff849-zg992"] Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.538271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" event={"ID":"acef2516-cdf2-4e58-bae8-00290015b684","Type":"ContainerStarted","Data":"fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00"} Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.539186 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.547117 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.556002 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wt8jx" event={"ID":"25990a28-3536-4602-9439-666774908da0","Type":"ContainerStarted","Data":"1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55"} Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.562661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfjjg" event={"ID":"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4","Type":"ContainerStarted","Data":"7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511"} Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 
09:02:37.564224 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" podStartSLOduration=3.564200483 podStartE2EDuration="3.564200483s" podCreationTimestamp="2026-03-14 09:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:37.561829163 +0000 UTC m=+310.534111226" watchObservedRunningTime="2026-03-14 09:02:37.564200483 +0000 UTC m=+310.536482546" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.595947 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wt8jx" podStartSLOduration=42.548861184 podStartE2EDuration="1m18.59592451s" podCreationTimestamp="2026-03-14 09:01:19 +0000 UTC" firstStartedPulling="2026-03-14 09:02:00.97471484 +0000 UTC m=+273.946996893" lastFinishedPulling="2026-03-14 09:02:37.021778166 +0000 UTC m=+309.994060219" observedRunningTime="2026-03-14 09:02:37.593740806 +0000 UTC m=+310.566022879" watchObservedRunningTime="2026-03-14 09:02:37.59592451 +0000 UTC m=+310.568206553" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.654135 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sfjjg" podStartSLOduration=41.678853345 podStartE2EDuration="1m17.654110675s" podCreationTimestamp="2026-03-14 09:01:20 +0000 UTC" firstStartedPulling="2026-03-14 09:02:00.927783719 +0000 UTC m=+273.900065772" lastFinishedPulling="2026-03-14 09:02:36.903041049 +0000 UTC m=+309.875323102" observedRunningTime="2026-03-14 09:02:37.650965346 +0000 UTC m=+310.623247399" watchObservedRunningTime="2026-03-14 09:02:37.654110675 +0000 UTC m=+310.626392728" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.723570 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07fe2a15-a6af-4104-8cc0-6ee9485e0c86" 
path="/var/lib/kubelet/pods/07fe2a15-a6af-4104-8cc0-6ee9485e0c86/volumes" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.724401 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e1bcfa-21f3-446f-aa5c-38f6abce60b2" path="/var/lib/kubelet/pods/d0e1bcfa-21f3-446f-aa5c-38f6abce60b2/volumes" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.897331 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h"] Mar 14 09:02:37 crc kubenswrapper[4869]: E0314 09:02:37.897621 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07fe2a15-a6af-4104-8cc0-6ee9485e0c86" containerName="route-controller-manager" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.897637 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="07fe2a15-a6af-4104-8cc0-6ee9485e0c86" containerName="route-controller-manager" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.897740 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fe2a15-a6af-4104-8cc0-6ee9485e0c86" containerName="route-controller-manager" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.898154 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.902473 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.902795 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.904138 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.904468 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.904492 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.904479 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.916310 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h"] Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.997170 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-config\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.997252 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6dl\" (UniqueName: \"kubernetes.io/projected/a3ef7860-c026-45cc-aa7c-4946d59971c9-kube-api-access-gx6dl\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.997281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ef7860-c026-45cc-aa7c-4946d59971c9-serving-cert\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:37 crc kubenswrapper[4869]: I0314 09:02:37.997304 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-client-ca\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.098711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-config\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.099123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx6dl\" (UniqueName: \"kubernetes.io/projected/a3ef7860-c026-45cc-aa7c-4946d59971c9-kube-api-access-gx6dl\") pod 
\"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.099298 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ef7860-c026-45cc-aa7c-4946d59971c9-serving-cert\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.099422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-client-ca\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.100549 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-client-ca\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.100650 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-config\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.115584 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ef7860-c026-45cc-aa7c-4946d59971c9-serving-cert\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.121907 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx6dl\" (UniqueName: \"kubernetes.io/projected/a3ef7860-c026-45cc-aa7c-4946d59971c9-kube-api-access-gx6dl\") pod \"route-controller-manager-788455f67b-qqx4h\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.221267 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.648704 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h"] Mar 14 09:02:38 crc kubenswrapper[4869]: W0314 09:02:38.661043 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3ef7860_c026_45cc_aa7c_4946d59971c9.slice/crio-459aa51b87fa0d88dd755dd83fe129129efb3684499aca001070a8c061090b57 WatchSource:0}: Error finding container 459aa51b87fa0d88dd755dd83fe129129efb3684499aca001070a8c061090b57: Status 404 returned error can't find the container with id 459aa51b87fa0d88dd755dd83fe129129efb3684499aca001070a8c061090b57 Mar 14 09:02:38 crc kubenswrapper[4869]: I0314 09:02:38.998408 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:38.999566 4869 kubelet.go:2431] "SyncLoop 
REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:38.999860 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc" gracePeriod=15 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:38.999922 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:38.999963 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d" gracePeriod=15 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:38.999992 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069" gracePeriod=15 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.000038 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a" gracePeriod=15 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.000036 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37" gracePeriod=15 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.000877 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.001039 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001052 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.001059 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001065 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.001075 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001080 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.001089 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001095 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 
09:02:39.001106 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001111 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.001119 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001125 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.001138 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001145 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.001155 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001161 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001263 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001275 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001283 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001290 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001298 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001306 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001314 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001320 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.001412 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001419 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.001426 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 
09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001433 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.001538 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.040770 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.113976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.114040 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.114101 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.114124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.114168 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.114189 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.114211 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.114234 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215492 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215584 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215612 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215691 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215733 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215649 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215765 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215786 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215875 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215906 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215928 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.215948 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.337945 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:02:39 crc kubenswrapper[4869]: W0314 09:02:39.365345 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5f9b28c5a99a5156520d73c5b1feaa7cf2321b09dc9ae67fd1faae3b1be95c52 WatchSource:0}: Error finding container 5f9b28c5a99a5156520d73c5b1feaa7cf2321b09dc9ae67fd1faae3b1be95c52: Status 404 returned error can't find the container with id 5f9b28c5a99a5156520d73c5b1feaa7cf2321b09dc9ae67fd1faae3b1be95c52 Mar 14 09:02:39 crc kubenswrapper[4869]: E0314 09:02:39.368368 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.148:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189ca9b79dbc2653 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 09:02:39.367595603 +0000 UTC m=+312.339877656,LastTimestamp:2026-03-14 09:02:39.367595603 +0000 UTC m=+312.339877656,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.578968 4869 generic.go:334] "Generic (PLEG): container finished" podID="60550272-ba92-4d24-b14e-ffd342a86579" containerID="88a3503b96056dadeedef17e98871e82396194723758dc525d669ef27e66dcdb" exitCode=0 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.579147 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"60550272-ba92-4d24-b14e-ffd342a86579","Type":"ContainerDied","Data":"88a3503b96056dadeedef17e98871e82396194723758dc525d669ef27e66dcdb"} Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.580146 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.580538 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.580946 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.581624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" event={"ID":"a3ef7860-c026-45cc-aa7c-4946d59971c9","Type":"ContainerStarted","Data":"5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6"} Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.581684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" event={"ID":"a3ef7860-c026-45cc-aa7c-4946d59971c9","Type":"ContainerStarted","Data":"459aa51b87fa0d88dd755dd83fe129129efb3684499aca001070a8c061090b57"} Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.582138 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.582872 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.583576 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.583846 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.584104 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.584441 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5f9b28c5a99a5156520d73c5b1feaa7cf2321b09dc9ae67fd1faae3b1be95c52"} Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.588225 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.592035 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.593141 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d" exitCode=0 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.593281 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a" exitCode=0 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.593388 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069" exitCode=0 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.593494 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37" exitCode=2 Mar 14 09:02:39 crc kubenswrapper[4869]: I0314 09:02:39.594606 4869 scope.go:117] "RemoveContainer" containerID="17135f49975472abcb3eacb2c9a6421f35c3b3ecb48ee364be0b69e80b2264bc" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.233735 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.234277 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.582917 4869 patch_prober.go:28] interesting pod/route-controller-manager-788455f67b-qqx4h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.583055 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.605099 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.607691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d"} Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.609326 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.609913 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.610474 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.681873 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.683099 4869 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.887342 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.888193 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.888821 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.889409 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.940249 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-kubelet-dir\") pod \"60550272-ba92-4d24-b14e-ffd342a86579\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.940817 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60550272-ba92-4d24-b14e-ffd342a86579-kube-api-access\") pod \"60550272-ba92-4d24-b14e-ffd342a86579\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.940415 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "60550272-ba92-4d24-b14e-ffd342a86579" (UID: "60550272-ba92-4d24-b14e-ffd342a86579"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.940880 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-var-lock\") pod \"60550272-ba92-4d24-b14e-ffd342a86579\" (UID: \"60550272-ba92-4d24-b14e-ffd342a86579\") " Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.941021 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-var-lock" (OuterVolumeSpecName: "var-lock") pod "60550272-ba92-4d24-b14e-ffd342a86579" (UID: "60550272-ba92-4d24-b14e-ffd342a86579"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.941208 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.941226 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60550272-ba92-4d24-b14e-ffd342a86579-var-lock\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:40 crc kubenswrapper[4869]: I0314 09:02:40.950041 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60550272-ba92-4d24-b14e-ffd342a86579-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "60550272-ba92-4d24-b14e-ffd342a86579" (UID: "60550272-ba92-4d24-b14e-ffd342a86579"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.042253 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60550272-ba92-4d24-b14e-ffd342a86579-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.285118 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wt8jx" podUID="25990a28-3536-4602-9439-666774908da0" containerName="registry-server" probeResult="failure" output=< Mar 14 09:02:41 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 09:02:41 crc kubenswrapper[4869]: > Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.608246 4869 patch_prober.go:28] interesting pod/route-controller-manager-788455f67b-qqx4h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.608710 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.615327 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.615459 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"60550272-ba92-4d24-b14e-ffd342a86579","Type":"ContainerDied","Data":"065f8467458db277eac17a8cbe3cb385bdd97f8b5f5bda954bd8c605d37ef4e8"} Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.615546 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065f8467458db277eac17a8cbe3cb385bdd97f8b5f5bda954bd8c605d37ef4e8" Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.627063 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.627686 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" 
pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.628312 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.733930 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sfjjg" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerName="registry-server" probeResult="failure" output=< Mar 14 09:02:41 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 09:02:41 crc kubenswrapper[4869]: > Mar 14 09:02:41 crc kubenswrapper[4869]: I0314 09:02:41.995985 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.002955 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.003688 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.004241 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.004499 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.004706 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.055701 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.055776 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.055857 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.055901 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.055934 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.056225 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.056568 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.056601 4869 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.056613 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.641747 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.642846 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc" exitCode=0 Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.642916 4869 scope.go:117] "RemoveContainer" containerID="dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.643002 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.659690 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.660004 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.662491 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.662771 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.663950 4869 scope.go:117] "RemoveContainer" containerID="b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.682623 4869 scope.go:117] "RemoveContainer" 
containerID="c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.705004 4869 scope.go:117] "RemoveContainer" containerID="a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.733337 4869 scope.go:117] "RemoveContainer" containerID="5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.755427 4869 scope.go:117] "RemoveContainer" containerID="3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.790287 4869 scope.go:117] "RemoveContainer" containerID="dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d" Mar 14 09:02:42 crc kubenswrapper[4869]: E0314 09:02:42.791421 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\": container with ID starting with dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d not found: ID does not exist" containerID="dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.791457 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d"} err="failed to get container status \"dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\": rpc error: code = NotFound desc = could not find container \"dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d\": container with ID starting with dbb8c0f13fcd91decd3e1e0d2e0a4f9880f1af743cb5921a8d9d91ca4dbb2f0d not found: ID does not exist" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.791482 4869 scope.go:117] "RemoveContainer" 
containerID="b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a" Mar 14 09:02:42 crc kubenswrapper[4869]: E0314 09:02:42.791951 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\": container with ID starting with b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a not found: ID does not exist" containerID="b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.791979 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a"} err="failed to get container status \"b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\": rpc error: code = NotFound desc = could not find container \"b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a\": container with ID starting with b62c44f99e53957c9ce2a40fb97bda31804bb932adfbd79d8d522f7d7a0a6e3a not found: ID does not exist" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.791996 4869 scope.go:117] "RemoveContainer" containerID="c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069" Mar 14 09:02:42 crc kubenswrapper[4869]: E0314 09:02:42.792405 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\": container with ID starting with c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069 not found: ID does not exist" containerID="c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.792425 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069"} err="failed to get container status \"c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\": rpc error: code = NotFound desc = could not find container \"c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069\": container with ID starting with c627bf53a4d8270e12a5608df609432beb1e1160d4003435ef96aa35d9ad6069 not found: ID does not exist" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.792438 4869 scope.go:117] "RemoveContainer" containerID="a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37" Mar 14 09:02:42 crc kubenswrapper[4869]: E0314 09:02:42.792706 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\": container with ID starting with a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37 not found: ID does not exist" containerID="a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.792725 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37"} err="failed to get container status \"a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\": rpc error: code = NotFound desc = could not find container \"a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37\": container with ID starting with a826b660a1edeadca116d3e80e6f16bc79866710f2e248cff095165e1bf18c37 not found: ID does not exist" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.792737 4869 scope.go:117] "RemoveContainer" containerID="5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc" Mar 14 09:02:42 crc kubenswrapper[4869]: E0314 09:02:42.792978 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\": container with ID starting with 5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc not found: ID does not exist" containerID="5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.792998 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc"} err="failed to get container status \"5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\": rpc error: code = NotFound desc = could not find container \"5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc\": container with ID starting with 5358925deae8ebdb778a4a698009b207563e9fb7892331b5b5653d2206311fbc not found: ID does not exist" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.793012 4869 scope.go:117] "RemoveContainer" containerID="3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45" Mar 14 09:02:42 crc kubenswrapper[4869]: E0314 09:02:42.793320 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\": container with ID starting with 3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45 not found: ID does not exist" containerID="3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45" Mar 14 09:02:42 crc kubenswrapper[4869]: I0314 09:02:42.793339 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45"} err="failed to get container status \"3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\": rpc error: code = NotFound desc = could not find container 
\"3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45\": container with ID starting with 3b4738d56cb839a05a40a7c32cc2c72f10041f1ccca0dc031ee200e3d755ab45 not found: ID does not exist" Mar 14 09:02:43 crc kubenswrapper[4869]: I0314 09:02:43.713196 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 14 09:02:45 crc kubenswrapper[4869]: E0314 09:02:45.224140 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:45 crc kubenswrapper[4869]: E0314 09:02:45.224606 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:45 crc kubenswrapper[4869]: E0314 09:02:45.225073 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:45 crc kubenswrapper[4869]: E0314 09:02:45.225425 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:45 crc kubenswrapper[4869]: E0314 09:02:45.225817 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:45 crc kubenswrapper[4869]: I0314 09:02:45.225852 4869 
controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 14 09:02:45 crc kubenswrapper[4869]: E0314 09:02:45.226112 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="200ms" Mar 14 09:02:45 crc kubenswrapper[4869]: E0314 09:02:45.427550 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="400ms" Mar 14 09:02:45 crc kubenswrapper[4869]: E0314 09:02:45.830102 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="800ms" Mar 14 09:02:46 crc kubenswrapper[4869]: E0314 09:02:46.631953 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="1.6s" Mar 14 09:02:47 crc kubenswrapper[4869]: I0314 09:02:47.706985 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:47 crc kubenswrapper[4869]: I0314 09:02:47.708272 4869 status_manager.go:851] 
"Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:47 crc kubenswrapper[4869]: I0314 09:02:47.708681 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:48 crc kubenswrapper[4869]: E0314 09:02:48.233244 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="3.2s" Mar 14 09:02:48 crc kubenswrapper[4869]: I0314 09:02:48.703480 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" containerName="oauth-openshift" containerID="cri-o://9bbfd92af0bceb71cf99da603f13f2ac57873eeb70ce3e11de8a03402b255d22" gracePeriod=15 Mar 14 09:02:48 crc kubenswrapper[4869]: E0314 09:02:48.838045 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.148:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189ca9b79dbc2653 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-14 09:02:39.367595603 +0000 UTC m=+312.339877656,LastTimestamp:2026-03-14 09:02:39.367595603 +0000 UTC m=+312.339877656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.221780 4869 patch_prober.go:28] interesting pod/route-controller-manager-788455f67b-qqx4h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.221939 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.707954 4869 generic.go:334] "Generic (PLEG): container finished" podID="6d3f7d57-086d-45b5-8b44-c749f1a13821" containerID="9bbfd92af0bceb71cf99da603f13f2ac57873eeb70ce3e11de8a03402b255d22" exitCode=0 Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.717137 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" event={"ID":"6d3f7d57-086d-45b5-8b44-c749f1a13821","Type":"ContainerDied","Data":"9bbfd92af0bceb71cf99da603f13f2ac57873eeb70ce3e11de8a03402b255d22"} Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.750690 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.751260 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-service-ca\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.751357 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-cliconfig\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.751401 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-dir\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.751445 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-idp-0-file-data\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc 
kubenswrapper[4869]: I0314 09:02:49.751482 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-login\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.751498 4869 status_manager.go:851] "Failed to get status for pod" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c25vk\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.752022 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.752215 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.752379 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.148:6443: connect: connection refused" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.752441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.752685 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.753024 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.761194 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.762260 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.852853 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-provider-selection\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.852981 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-trusted-ca-bundle\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853050 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnh5b\" (UniqueName: \"kubernetes.io/projected/6d3f7d57-086d-45b5-8b44-c749f1a13821-kube-api-access-qnh5b\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853128 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-error\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-router-certs\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853266 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-serving-cert\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-session\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853341 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-ocp-branding-template\") pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853385 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-policies\") 
pod \"6d3f7d57-086d-45b5-8b44-c749f1a13821\" (UID: \"6d3f7d57-086d-45b5-8b44-c749f1a13821\") " Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853694 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853721 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853744 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853765 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.853787 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.854745 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.857049 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.857072 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.858003 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.858720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3f7d57-086d-45b5-8b44-c749f1a13821-kube-api-access-qnh5b" (OuterVolumeSpecName: "kube-api-access-qnh5b") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "kube-api-access-qnh5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.859319 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.860172 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.861297 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.862771 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6d3f7d57-086d-45b5-8b44-c749f1a13821" (UID: "6d3f7d57-086d-45b5-8b44-c749f1a13821"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.955650 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.955731 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.955754 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.955776 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.955800 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.955824 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.955853 4869 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.955873 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnh5b\" (UniqueName: \"kubernetes.io/projected/6d3f7d57-086d-45b5-8b44-c749f1a13821-kube-api-access-qnh5b\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:49 crc kubenswrapper[4869]: I0314 09:02:49.955894 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d3f7d57-086d-45b5-8b44-c749f1a13821-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.301268 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.302212 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.302793 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.303155 4869 status_manager.go:851] "Failed to get status for pod" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" 
pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c25vk\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.303687 4869 status_manager.go:851] "Failed to get status for pod" podUID="25990a28-3536-4602-9439-666774908da0" pod="openshift-marketplace/redhat-operators-wt8jx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wt8jx\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.304166 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.346624 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wt8jx" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.347291 4869 status_manager.go:851] "Failed to get status for pod" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c25vk\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.347967 4869 status_manager.go:851] "Failed to get status for pod" podUID="25990a28-3536-4602-9439-666774908da0" pod="openshift-marketplace/redhat-operators-wt8jx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wt8jx\": 
dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.348353 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.348674 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.348975 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.703755 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.704975 4869 status_manager.go:851] "Failed to get status for pod" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c25vk\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.705466 4869 status_manager.go:851] "Failed to get status for pod" podUID="25990a28-3536-4602-9439-666774908da0" pod="openshift-marketplace/redhat-operators-wt8jx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wt8jx\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.706003 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.706456 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.706962 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.732244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" event={"ID":"6d3f7d57-086d-45b5-8b44-c749f1a13821","Type":"ContainerDied","Data":"6b6eca8bde35ce621bc2f320fe68255d1057c2dad0abc096041e2b91b9f88a50"} Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.732292 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.732336 4869 scope.go:117] "RemoveContainer" containerID="9bbfd92af0bceb71cf99da603f13f2ac57873eeb70ce3e11de8a03402b255d22" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.733471 4869 status_manager.go:851] "Failed to get status for pod" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c25vk\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.733844 4869 status_manager.go:851] "Failed to get status for pod" podUID="25990a28-3536-4602-9439-666774908da0" pod="openshift-marketplace/redhat-operators-wt8jx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wt8jx\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.734991 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.735020 4869 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:02:50 crc kubenswrapper[4869]: E0314 09:02:50.735290 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.735740 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.736234 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.736755 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.737366 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.754186 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.754662 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.754871 4869 status_manager.go:851] "Failed to get status for pod" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" pod="openshift-marketplace/redhat-operators-sfjjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sfjjg\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.755173 4869 status_manager.go:851] "Failed to get status for pod" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c25vk\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.755654 4869 status_manager.go:851] "Failed to get status for pod" podUID="25990a28-3536-4602-9439-666774908da0" pod="openshift-marketplace/redhat-operators-wt8jx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wt8jx\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.756253 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.756590 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.795491 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.795986 4869 status_manager.go:851] "Failed to get status for pod" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" pod="openshift-marketplace/redhat-operators-sfjjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sfjjg\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.796245 4869 status_manager.go:851] "Failed to get status for pod" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c25vk\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.796442 4869 status_manager.go:851] "Failed to get status for pod" podUID="25990a28-3536-4602-9439-666774908da0" 
pod="openshift-marketplace/redhat-operators-wt8jx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wt8jx\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.796880 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.797138 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.801925 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.802364 4869 status_manager.go:851] "Failed to get status for pod" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" pod="openshift-marketplace/redhat-operators-sfjjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sfjjg\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.802732 4869 status_manager.go:851] "Failed to get status for pod" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c25vk\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.803142 4869 status_manager.go:851] "Failed to get status for pod" podUID="25990a28-3536-4602-9439-666774908da0" pod="openshift-marketplace/redhat-operators-wt8jx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wt8jx\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.803426 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.803735 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:50 crc kubenswrapper[4869]: I0314 09:02:50.803947 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:51 crc kubenswrapper[4869]: E0314 09:02:51.435449 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="6.4s" Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.743856 4869 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="de35fa8513afba4b9dab608f960d9a59b90cc8a849a0f115b037d8111fec614a" exitCode=0 Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.744015 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"de35fa8513afba4b9dab608f960d9a59b90cc8a849a0f115b037d8111fec614a"} Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.744131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4628ab2020df7f3cedbdc5eb80d8a2eec30e5c2ea9a5005cd094e15ebeba2e41"} Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.744611 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.744651 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.745255 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:51 crc kubenswrapper[4869]: E0314 09:02:51.745275 4869 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.745592 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-788455f67b-qqx4h\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.745973 4869 status_manager.go:851] "Failed to get status for pod" podUID="60550272-ba92-4d24-b14e-ffd342a86579" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.746232 4869 status_manager.go:851] "Failed to get status for pod" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" pod="openshift-marketplace/redhat-operators-sfjjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-sfjjg\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.746441 4869 status_manager.go:851] "Failed to get status for pod" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" pod="openshift-authentication/oauth-openshift-558db77b4-c25vk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c25vk\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:51 crc kubenswrapper[4869]: I0314 09:02:51.746754 4869 status_manager.go:851] 
"Failed to get status for pod" podUID="25990a28-3536-4602-9439-666774908da0" pod="openshift-marketplace/redhat-operators-wt8jx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wt8jx\": dial tcp 38.102.83.148:6443: connect: connection refused" Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.754391 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.759781 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.759844 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f8dfdf89c5eabafa0eb7699f688cd19b618d9dec88a291eaa255669ef5cb5e69" exitCode=1 Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.759943 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f8dfdf89c5eabafa0eb7699f688cd19b618d9dec88a291eaa255669ef5cb5e69"} Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.760663 4869 scope.go:117] "RemoveContainer" containerID="f8dfdf89c5eabafa0eb7699f688cd19b618d9dec88a291eaa255669ef5cb5e69" Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.763799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"900ca17d5795a570587839e3ded7a95f42e84da1b19f45dc0e65f481ea3e3ed6"} Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.763835 4869 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"751c49216d62a174e09f19805e9a12469eb443a871c0fb15e8c0a30fdefff983"} Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.763846 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"de8e588d43ae0a15da66d01e34e8bd5179fb74467d81cd95854d69fd1d9e12ae"} Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.763855 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"793c72da0f5cfda8e237007093542b63f11d6331240cc6746cf53a7ebdb67dbd"} Mar 14 09:02:52 crc kubenswrapper[4869]: I0314 09:02:52.885150 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:02:53 crc kubenswrapper[4869]: I0314 09:02:53.774029 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0e6d62ed2af016a528b35cdcc684d6076b07e1a03b3b919b71871b1c897309b1"} Mar 14 09:02:53 crc kubenswrapper[4869]: I0314 09:02:53.774348 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:02:53 crc kubenswrapper[4869]: I0314 09:02:53.774383 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:02:53 crc kubenswrapper[4869]: I0314 09:02:53.774428 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:53 crc kubenswrapper[4869]: 
I0314 09:02:53.777970 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 09:02:53 crc kubenswrapper[4869]: I0314 09:02:53.779282 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 09:02:53 crc kubenswrapper[4869]: I0314 09:02:53.779344 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"91f93e9e1af35bd4c5186b0d43fd80282b6e9014460f39e2aeea42d9254b7f7d"} Mar 14 09:02:55 crc kubenswrapper[4869]: I0314 09:02:55.737162 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:55 crc kubenswrapper[4869]: I0314 09:02:55.737810 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:55 crc kubenswrapper[4869]: I0314 09:02:55.747400 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:58 crc kubenswrapper[4869]: I0314 09:02:58.782318 4869 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:58 crc kubenswrapper[4869]: I0314 09:02:58.811153 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:02:58 crc kubenswrapper[4869]: I0314 09:02:58.811183 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:02:58 crc kubenswrapper[4869]: I0314 
09:02:58.815037 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:02:58 crc kubenswrapper[4869]: I0314 09:02:58.818104 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a2bcc535-0c44-4da1-ae9b-7b4180db8601" Mar 14 09:02:59 crc kubenswrapper[4869]: I0314 09:02:59.222628 4869 patch_prober.go:28] interesting pod/route-controller-manager-788455f67b-qqx4h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:02:59 crc kubenswrapper[4869]: I0314 09:02:59.222721 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 09:02:59 crc kubenswrapper[4869]: I0314 09:02:59.716105 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:02:59 crc kubenswrapper[4869]: I0314 09:02:59.816991 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:02:59 crc kubenswrapper[4869]: I0314 09:02:59.817035 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:03:02 crc kubenswrapper[4869]: I0314 
09:03:02.885537 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:03:02 crc kubenswrapper[4869]: I0314 09:03:02.886243 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 14 09:03:02 crc kubenswrapper[4869]: I0314 09:03:02.886323 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 14 09:03:07 crc kubenswrapper[4869]: I0314 09:03:07.728939 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a2bcc535-0c44-4da1-ae9b-7b4180db8601" Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.222873 4869 patch_prober.go:28] interesting pod/route-controller-manager-788455f67b-qqx4h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.222943 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.223022 4869 patch_prober.go:28] interesting pod/route-controller-manager-788455f67b-qqx4h container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.223156 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.372397 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.819638 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.840869 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.886348 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-788455f67b-qqx4h_a3ef7860-c026-45cc-aa7c-4946d59971c9/route-controller-manager/0.log" Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.886406 4869 
generic.go:334] "Generic (PLEG): container finished" podID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerID="5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6" exitCode=255 Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.886449 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" event={"ID":"a3ef7860-c026-45cc-aa7c-4946d59971c9","Type":"ContainerDied","Data":"5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6"} Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.887182 4869 scope.go:117] "RemoveContainer" containerID="5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6" Mar 14 09:03:09 crc kubenswrapper[4869]: I0314 09:03:09.930902 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 14 09:03:10 crc kubenswrapper[4869]: I0314 09:03:10.143056 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 14 09:03:10 crc kubenswrapper[4869]: I0314 09:03:10.256466 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 14 09:03:10 crc kubenswrapper[4869]: I0314 09:03:10.678662 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 14 09:03:10 crc kubenswrapper[4869]: I0314 09:03:10.821360 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 14 09:03:10 crc kubenswrapper[4869]: I0314 09:03:10.897591 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-788455f67b-qqx4h_a3ef7860-c026-45cc-aa7c-4946d59971c9/route-controller-manager/0.log" Mar 14 09:03:10 crc kubenswrapper[4869]: I0314 
09:03:10.897697 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" event={"ID":"a3ef7860-c026-45cc-aa7c-4946d59971c9","Type":"ContainerStarted","Data":"9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930"} Mar 14 09:03:10 crc kubenswrapper[4869]: I0314 09:03:10.898300 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:03:10 crc kubenswrapper[4869]: I0314 09:03:10.954019 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 14 09:03:10 crc kubenswrapper[4869]: I0314 09:03:10.990062 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 14 09:03:11 crc kubenswrapper[4869]: I0314 09:03:11.159151 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 14 09:03:11 crc kubenswrapper[4869]: I0314 09:03:11.296380 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 14 09:03:11 crc kubenswrapper[4869]: I0314 09:03:11.361638 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:03:11 crc kubenswrapper[4869]: I0314 09:03:11.533121 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 14 09:03:11 crc kubenswrapper[4869]: I0314 09:03:11.624497 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.037256 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.067869 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.236913 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.270772 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.483810 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.536403 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.712818 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.777500 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.782922 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" podStartSLOduration=38.782891718 podStartE2EDuration="38.782891718s" podCreationTimestamp="2026-03-14 09:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:58.457705328 +0000 UTC m=+331.429987381" watchObservedRunningTime="2026-03-14 09:03:12.782891718 +0000 UTC m=+345.755173791" Mar 14 09:03:12 crc 
kubenswrapper[4869]: I0314 09:03:12.783421 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=33.783414812 podStartE2EDuration="33.783414812s" podCreationTimestamp="2026-03-14 09:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:02:58.435955451 +0000 UTC m=+331.408237504" watchObservedRunningTime="2026-03-14 09:03:12.783414812 +0000 UTC m=+345.755696885" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.784363 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-c25vk"] Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.784566 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.785195 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.785240 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ed63d38-6eaf-4a6b-90f2-33571f319b1b" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.790007 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.794661 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.809620 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.80959055 
podStartE2EDuration="14.80959055s" podCreationTimestamp="2026-03-14 09:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:03:12.808252516 +0000 UTC m=+345.780534599" watchObservedRunningTime="2026-03-14 09:03:12.80959055 +0000 UTC m=+345.781872643" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.886656 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.886747 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 14 09:03:12 crc kubenswrapper[4869]: I0314 09:03:12.922856 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.078962 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.097124 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.117899 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.293248 4869 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.369048 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.409161 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.458550 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.537938 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.554979 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.609311 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.647620 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.689067 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.690468 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.711814 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" 
path="/var/lib/kubelet/pods/6d3f7d57-086d-45b5-8b44-c749f1a13821/volumes" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.873703 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.911280 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.949279 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 14 09:03:13 crc kubenswrapper[4869]: I0314 09:03:13.991201 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.076615 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.172597 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.233005 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.450109 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.511822 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.553026 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 
09:03:14.650432 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.664606 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.681537 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.681562 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.773305 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.976543 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 14 09:03:14 crc kubenswrapper[4869]: I0314 09:03:14.988428 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.063338 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.075719 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.080596 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.116584 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 
09:03:15.119712 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.121024 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.129249 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.241583 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.247711 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.304121 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.350057 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.514381 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.530183 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.677146 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 14 09:03:15 crc kubenswrapper[4869]: I0314 09:03:15.828383 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 14 09:03:15 crc 
kubenswrapper[4869]: I0314 09:03:15.925487 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.029994 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.054806 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.063556 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.114207 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.182746 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.202808 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.314183 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.399552 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.420117 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.439046 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"metrics-tls" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.446726 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.484768 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.569814 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.615351 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.622094 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.664931 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.731659 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.758483 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.801544 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.929007 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.929452 4869 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 14 09:03:16 crc kubenswrapper[4869]: I0314 09:03:16.983351 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.001116 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.019126 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.109436 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.111154 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.169349 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.230083 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.266861 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.345367 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.399550 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.417643 4869 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.423519 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.461079 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.524691 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.549780 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.568395 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.601965 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.814268 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.838014 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.933252 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 14 09:03:17 crc kubenswrapper[4869]: I0314 09:03:17.985636 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.011561 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.133055 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.154598 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.213848 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.235645 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.239127 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.278458 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.309805 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.311435 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.330250 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.358995 4869 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.366056 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.368432 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.376140 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.377679 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.404378 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.515614 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.567777 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.629174 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.635607 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.640324 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.679146 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.793361 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.843965 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 14 09:03:18 crc kubenswrapper[4869]: I0314 09:03:18.867168 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.120915 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.146605 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.162744 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.200769 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.210237 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.248170 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.298441 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.374873 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.377043 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.383886 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.423950 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.477102 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.505276 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.587052 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.598819 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.965263 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 14 09:03:19 crc kubenswrapper[4869]: I0314 09:03:19.975813 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.100082 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.117646 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.267738 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.273098 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.325857 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.401646 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.439848 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.494691 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.510185 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.536313 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.538553 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.570694 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.801626 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.802605 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.883534 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.909316 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.963687 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 14 09:03:20 crc kubenswrapper[4869]: I0314 09:03:20.966631 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.009788 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.050885 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.098018 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:03:21 crc 
kubenswrapper[4869]: I0314 09:03:21.098068 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.098313 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d" gracePeriod=5 Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.192150 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.298498 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.361046 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.370956 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.374877 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.471156 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.499962 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.591712 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 14 
09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.609249 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.627321 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.646141 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.735687 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.755342 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.761133 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.898688 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 14 09:03:21 crc kubenswrapper[4869]: I0314 09:03:21.956918 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.075283 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.098361 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.220307 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.246660 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.287185 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.312059 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.350655 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.376553 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.382326 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.497969 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.507862 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.566129 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.603978 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.650907 4869 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.751778 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.808747 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.852649 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.885961 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.886037 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.886106 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.887195 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"91f93e9e1af35bd4c5186b0d43fd80282b6e9014460f39e2aeea42d9254b7f7d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.887376 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://91f93e9e1af35bd4c5186b0d43fd80282b6e9014460f39e2aeea42d9254b7f7d" gracePeriod=30 Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.921783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.932878 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 14 09:03:22 crc kubenswrapper[4869]: I0314 09:03:22.958728 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 14 09:03:23 crc kubenswrapper[4869]: I0314 09:03:23.041211 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 14 09:03:23 crc kubenswrapper[4869]: I0314 09:03:23.079913 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 14 09:03:23 crc kubenswrapper[4869]: I0314 09:03:23.116774 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 14 09:03:23 crc kubenswrapper[4869]: I0314 09:03:23.223709 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 14 09:03:23 crc kubenswrapper[4869]: I0314 09:03:23.367647 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Mar 14 09:03:23 crc kubenswrapper[4869]: I0314 09:03:23.404980 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 14 09:03:23 crc kubenswrapper[4869]: I0314 09:03:23.585998 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 14 09:03:23 crc kubenswrapper[4869]: I0314 09:03:23.928944 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.003815 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7c594ff4d6-tldf9"] Mar 14 09:03:24 crc kubenswrapper[4869]: E0314 09:03:24.004193 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.004212 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 09:03:24 crc kubenswrapper[4869]: E0314 09:03:24.004226 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" containerName="oauth-openshift" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.004235 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" containerName="oauth-openshift" Mar 14 09:03:24 crc kubenswrapper[4869]: E0314 09:03:24.004251 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60550272-ba92-4d24-b14e-ffd342a86579" containerName="installer" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.004260 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="60550272-ba92-4d24-b14e-ffd342a86579" containerName="installer" Mar 14 09:03:24 crc 
kubenswrapper[4869]: I0314 09:03:24.004403 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.004430 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3f7d57-086d-45b5-8b44-c749f1a13821" containerName="oauth-openshift" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.004441 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="60550272-ba92-4d24-b14e-ffd342a86579" containerName="installer" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.005051 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.009186 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.009275 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.009200 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.010124 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.011654 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.012076 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.012342 4869 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.012660 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.012792 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.012957 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.013282 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.022780 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c594ff4d6-tldf9"] Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.022951 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.023215 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.026298 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.043142 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.050634 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-audit-policies\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.050685 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-audit-dir\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.050715 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.050757 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-template-error\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.050788 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: 
\"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.050826 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.050849 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-template-login\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.050976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.051052 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7b2d\" (UniqueName: \"kubernetes.io/projected/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-kube-api-access-n7b2d\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 
09:03:24.051097 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.051165 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.051252 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-session\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.051332 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.051372 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.076015 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.153771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-audit-policies\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.154836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-audit-dir\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.154761 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-audit-policies\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.154946 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-cliconfig\") 
pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.154999 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-template-error\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.155027 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.155127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-audit-dir\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.156199 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.156488 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-template-login\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.156543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.156566 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7b2d\" (UniqueName: \"kubernetes.io/projected/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-kube-api-access-n7b2d\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.156584 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.156614 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.156652 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-session\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.156681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.156705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.157288 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.157784 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.158157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.162528 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-template-login\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.162593 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.163194 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.173897 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-template-error\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.174150 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-session\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.174432 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.174458 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.175483 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.177896 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7b2d\" (UniqueName: \"kubernetes.io/projected/5d3a1baf-66e7-4fee-8c89-0c652dbf7684-kube-api-access-n7b2d\") pod \"oauth-openshift-7c594ff4d6-tldf9\" (UID: \"5d3a1baf-66e7-4fee-8c89-0c652dbf7684\") " pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.252568 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.309658 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.328040 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.548143 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.739655 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c594ff4d6-tldf9"] Mar 14 09:03:24 crc kubenswrapper[4869]: I0314 09:03:24.879748 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 14 09:03:25 crc kubenswrapper[4869]: I0314 09:03:25.006033 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" event={"ID":"5d3a1baf-66e7-4fee-8c89-0c652dbf7684","Type":"ContainerStarted","Data":"c1f5efb6431bb898342d78c3f2f96ce05300ba462dbf809c9b95a225c4c2b5f4"} Mar 14 09:03:25 crc kubenswrapper[4869]: I0314 09:03:25.047449 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 14 09:03:25 crc kubenswrapper[4869]: I0314 09:03:25.237800 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 14 09:03:25 crc kubenswrapper[4869]: I0314 09:03:25.294752 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 14 09:03:25 crc kubenswrapper[4869]: I0314 09:03:25.303055 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 14 09:03:25 crc kubenswrapper[4869]: I0314 09:03:25.471084 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 14 09:03:25 crc kubenswrapper[4869]: I0314 09:03:25.773252 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.012875 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" event={"ID":"5d3a1baf-66e7-4fee-8c89-0c652dbf7684","Type":"ContainerStarted","Data":"97f979da3132418d47e6206e06650b133bcd260cb8d6955929efd2efa3e8dade"} Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.013381 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.020388 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.051054 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.070744 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7c594ff4d6-tldf9" podStartSLOduration=63.070714162 podStartE2EDuration="1m3.070714162s" podCreationTimestamp="2026-03-14 09:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:03:26.045848146 +0000 UTC m=+359.018130219" watchObservedRunningTime="2026-03-14 09:03:26.070714162 +0000 UTC m=+359.042996215" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.090197 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.475349 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.716543 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.716640 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.860429 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.891721 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894336 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894384 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894404 4869 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894460 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894467 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894488 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894537 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894565 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894860 4869 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894879 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894889 4869 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.894901 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.906971 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:03:26 crc kubenswrapper[4869]: I0314 09:03:26.996105 4869 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.023748 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.023806 4869 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d" exitCode=137 Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.024665 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.026655 4869 scope.go:117] "RemoveContainer" containerID="3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.051838 4869 scope.go:117] "RemoveContainer" containerID="3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d" Mar 14 09:03:27 crc kubenswrapper[4869]: E0314 09:03:27.052394 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d\": container with ID starting with 3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d not found: ID does not exist" containerID="3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.052449 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d"} err="failed to get container status \"3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d\": rpc error: code = NotFound desc = could not find container \"3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d\": container with ID starting with 3ac81bef25b123f916ee4c7629788a35f184e045baed87e9db096452963abb1d not found: ID does not exist" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.711613 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.711988 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.727735 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.727816 4869 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="08130bdd-cdfb-4c65-b194-8ee9519e71e5" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.731623 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.731657 4869 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="08130bdd-cdfb-4c65-b194-8ee9519e71e5" Mar 14 09:03:27 crc kubenswrapper[4869]: I0314 09:03:27.816359 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 14 09:03:28 crc kubenswrapper[4869]: I0314 
09:03:28.018476 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 14 09:03:48 crc kubenswrapper[4869]: I0314 09:03:48.390252 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sfjjg"] Mar 14 09:03:48 crc kubenswrapper[4869]: I0314 09:03:48.391395 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sfjjg" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerName="registry-server" containerID="cri-o://7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511" gracePeriod=2 Mar 14 09:03:48 crc kubenswrapper[4869]: I0314 09:03:48.783221 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:03:48 crc kubenswrapper[4869]: I0314 09:03:48.981471 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-catalog-content\") pod \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " Mar 14 09:03:48 crc kubenswrapper[4869]: I0314 09:03:48.982089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tqlh\" (UniqueName: \"kubernetes.io/projected/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-kube-api-access-8tqlh\") pod \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " Mar 14 09:03:48 crc kubenswrapper[4869]: I0314 09:03:48.982232 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-utilities\") pod \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\" (UID: \"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4\") " Mar 14 09:03:48 crc kubenswrapper[4869]: I0314 
09:03:48.983134 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-utilities" (OuterVolumeSpecName: "utilities") pod "49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" (UID: "49ae5a4f-b968-45b6-8f1a-2a96b7af34b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:03:48 crc kubenswrapper[4869]: I0314 09:03:48.990268 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-kube-api-access-8tqlh" (OuterVolumeSpecName: "kube-api-access-8tqlh") pod "49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" (UID: "49ae5a4f-b968-45b6-8f1a-2a96b7af34b4"). InnerVolumeSpecName "kube-api-access-8tqlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.083435 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.083475 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tqlh\" (UniqueName: \"kubernetes.io/projected/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-kube-api-access-8tqlh\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.114912 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" (UID: "49ae5a4f-b968-45b6-8f1a-2a96b7af34b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.171411 4869 generic.go:334] "Generic (PLEG): container finished" podID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerID="7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511" exitCode=0 Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.171498 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sfjjg" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.171499 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfjjg" event={"ID":"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4","Type":"ContainerDied","Data":"7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511"} Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.171619 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfjjg" event={"ID":"49ae5a4f-b968-45b6-8f1a-2a96b7af34b4","Type":"ContainerDied","Data":"a66c14ca01eccfdb511f4d2ac077a8e1d2afbbc9941be8b118f4665d83136a76"} Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.171647 4869 scope.go:117] "RemoveContainer" containerID="7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.184432 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.200747 4869 scope.go:117] "RemoveContainer" containerID="0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.210472 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sfjjg"] Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 
09:03:49.214419 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sfjjg"] Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.227303 4869 scope.go:117] "RemoveContainer" containerID="cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.247031 4869 scope.go:117] "RemoveContainer" containerID="7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511" Mar 14 09:03:49 crc kubenswrapper[4869]: E0314 09:03:49.247991 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511\": container with ID starting with 7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511 not found: ID does not exist" containerID="7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.248052 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511"} err="failed to get container status \"7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511\": rpc error: code = NotFound desc = could not find container \"7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511\": container with ID starting with 7b7d5e25bcd39c55cdb4201b67a0e632f769c5084d089971ace2a353272c9511 not found: ID does not exist" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.248088 4869 scope.go:117] "RemoveContainer" containerID="0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620" Mar 14 09:03:49 crc kubenswrapper[4869]: E0314 09:03:49.248689 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620\": container with ID 
starting with 0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620 not found: ID does not exist" containerID="0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.248763 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620"} err="failed to get container status \"0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620\": rpc error: code = NotFound desc = could not find container \"0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620\": container with ID starting with 0741072baa4fb85286aec38dbe22a20e3e2f995497295410e7b70b1fa345b620 not found: ID does not exist" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.248814 4869 scope.go:117] "RemoveContainer" containerID="cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b" Mar 14 09:03:49 crc kubenswrapper[4869]: E0314 09:03:49.249732 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b\": container with ID starting with cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b not found: ID does not exist" containerID="cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.249762 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b"} err="failed to get container status \"cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b\": rpc error: code = NotFound desc = could not find container \"cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b\": container with ID starting with cdfec1344defd492d7994f0e189ade612ef7eb230ef7876f9e997f751da2006b not found: 
ID does not exist" Mar 14 09:03:49 crc kubenswrapper[4869]: I0314 09:03:49.712544 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" path="/var/lib/kubelet/pods/49ae5a4f-b968-45b6-8f1a-2a96b7af34b4/volumes" Mar 14 09:03:53 crc kubenswrapper[4869]: I0314 09:03:53.197438 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 14 09:03:53 crc kubenswrapper[4869]: I0314 09:03:53.200568 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 09:03:53 crc kubenswrapper[4869]: I0314 09:03:53.202227 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 14 09:03:53 crc kubenswrapper[4869]: I0314 09:03:53.202299 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="91f93e9e1af35bd4c5186b0d43fd80282b6e9014460f39e2aeea42d9254b7f7d" exitCode=137 Mar 14 09:03:53 crc kubenswrapper[4869]: I0314 09:03:53.202340 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"91f93e9e1af35bd4c5186b0d43fd80282b6e9014460f39e2aeea42d9254b7f7d"} Mar 14 09:03:53 crc kubenswrapper[4869]: I0314 09:03:53.202387 4869 scope.go:117] "RemoveContainer" containerID="f8dfdf89c5eabafa0eb7699f688cd19b618d9dec88a291eaa255669ef5cb5e69" Mar 14 09:03:54 crc kubenswrapper[4869]: I0314 09:03:54.217401 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 14 09:03:54 crc kubenswrapper[4869]: I0314 09:03:54.218540 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 14 09:03:54 crc kubenswrapper[4869]: I0314 09:03:54.219099 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c9081713553220b983301295eb323a872e111653355353e1dc242493bea00653"} Mar 14 09:03:59 crc kubenswrapper[4869]: I0314 09:03:59.709951 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:04:02 crc kubenswrapper[4869]: I0314 09:04:02.885923 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:04:02 crc kubenswrapper[4869]: I0314 09:04:02.894916 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:04:03 crc kubenswrapper[4869]: I0314 09:04:03.290181 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.226238 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557984-q2lw7"] Mar 14 09:04:13 crc kubenswrapper[4869]: E0314 09:04:13.229441 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerName="extract-utilities" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.229584 4869 
state_mem.go:107] "Deleted CPUSet assignment" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerName="extract-utilities" Mar 14 09:04:13 crc kubenswrapper[4869]: E0314 09:04:13.229669 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerName="extract-content" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.229749 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerName="extract-content" Mar 14 09:04:13 crc kubenswrapper[4869]: E0314 09:04:13.229835 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerName="registry-server" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.229895 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerName="registry-server" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.230075 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ae5a4f-b968-45b6-8f1a-2a96b7af34b4" containerName="registry-server" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.230616 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557984-q2lw7" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.236073 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.236860 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.245367 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557984-q2lw7"] Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.250335 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.295303 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c764f6b9-5g8qb"] Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.295620 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" podUID="acef2516-cdf2-4e58-bae8-00290015b684" containerName="controller-manager" containerID="cri-o://fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00" gracePeriod=30 Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.427305 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq59b\" (UniqueName: \"kubernetes.io/projected/ce16cfb8-2f11-464c-8fe8-84be308a6131-kube-api-access-hq59b\") pod \"auto-csr-approver-29557984-q2lw7\" (UID: \"ce16cfb8-2f11-464c-8fe8-84be308a6131\") " pod="openshift-infra/auto-csr-approver-29557984-q2lw7" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.472110 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h"] Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.472395 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" containerID="cri-o://9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930" gracePeriod=30 Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.528643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq59b\" (UniqueName: \"kubernetes.io/projected/ce16cfb8-2f11-464c-8fe8-84be308a6131-kube-api-access-hq59b\") pod \"auto-csr-approver-29557984-q2lw7\" (UID: \"ce16cfb8-2f11-464c-8fe8-84be308a6131\") " pod="openshift-infra/auto-csr-approver-29557984-q2lw7" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.559566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq59b\" (UniqueName: \"kubernetes.io/projected/ce16cfb8-2f11-464c-8fe8-84be308a6131-kube-api-access-hq59b\") pod \"auto-csr-approver-29557984-q2lw7\" (UID: \"ce16cfb8-2f11-464c-8fe8-84be308a6131\") " pod="openshift-infra/auto-csr-approver-29557984-q2lw7" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.561764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557984-q2lw7" Mar 14 09:04:13 crc kubenswrapper[4869]: I0314 09:04:13.955681 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.069082 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acef2516-cdf2-4e58-bae8-00290015b684-serving-cert\") pod \"acef2516-cdf2-4e58-bae8-00290015b684\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.069215 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqlwf\" (UniqueName: \"kubernetes.io/projected/acef2516-cdf2-4e58-bae8-00290015b684-kube-api-access-jqlwf\") pod \"acef2516-cdf2-4e58-bae8-00290015b684\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.069288 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-config\") pod \"acef2516-cdf2-4e58-bae8-00290015b684\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.069314 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-client-ca\") pod \"acef2516-cdf2-4e58-bae8-00290015b684\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.069346 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-proxy-ca-bundles\") pod \"acef2516-cdf2-4e58-bae8-00290015b684\" (UID: \"acef2516-cdf2-4e58-bae8-00290015b684\") " Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.070405 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "acef2516-cdf2-4e58-bae8-00290015b684" (UID: "acef2516-cdf2-4e58-bae8-00290015b684"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.070859 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-client-ca" (OuterVolumeSpecName: "client-ca") pod "acef2516-cdf2-4e58-bae8-00290015b684" (UID: "acef2516-cdf2-4e58-bae8-00290015b684"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.070970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-config" (OuterVolumeSpecName: "config") pod "acef2516-cdf2-4e58-bae8-00290015b684" (UID: "acef2516-cdf2-4e58-bae8-00290015b684"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.079784 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acef2516-cdf2-4e58-bae8-00290015b684-kube-api-access-jqlwf" (OuterVolumeSpecName: "kube-api-access-jqlwf") pod "acef2516-cdf2-4e58-bae8-00290015b684" (UID: "acef2516-cdf2-4e58-bae8-00290015b684"). InnerVolumeSpecName "kube-api-access-jqlwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.081876 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acef2516-cdf2-4e58-bae8-00290015b684-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "acef2516-cdf2-4e58-bae8-00290015b684" (UID: "acef2516-cdf2-4e58-bae8-00290015b684"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.092352 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-788455f67b-qqx4h_a3ef7860-c026-45cc-aa7c-4946d59971c9/route-controller-manager/0.log" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.092433 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.141164 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557984-q2lw7"] Mar 14 09:04:14 crc kubenswrapper[4869]: W0314 09:04:14.148072 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce16cfb8_2f11_464c_8fe8_84be308a6131.slice/crio-5f6282fdc6473044e2d236658c81dce2a91a729157d712bcaa62f1cbe09d2352 WatchSource:0}: Error finding container 5f6282fdc6473044e2d236658c81dce2a91a729157d712bcaa62f1cbe09d2352: Status 404 returned error can't find the container with id 5f6282fdc6473044e2d236658c81dce2a91a729157d712bcaa62f1cbe09d2352 Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.172020 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqlwf\" (UniqueName: \"kubernetes.io/projected/acef2516-cdf2-4e58-bae8-00290015b684-kube-api-access-jqlwf\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.172066 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.172077 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.172088 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acef2516-cdf2-4e58-bae8-00290015b684-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.172096 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acef2516-cdf2-4e58-bae8-00290015b684-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.273497 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx6dl\" (UniqueName: \"kubernetes.io/projected/a3ef7860-c026-45cc-aa7c-4946d59971c9-kube-api-access-gx6dl\") pod \"a3ef7860-c026-45cc-aa7c-4946d59971c9\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.273636 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-client-ca\") pod \"a3ef7860-c026-45cc-aa7c-4946d59971c9\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.273709 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-config\") pod \"a3ef7860-c026-45cc-aa7c-4946d59971c9\" (UID: \"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.273747 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ef7860-c026-45cc-aa7c-4946d59971c9-serving-cert\") pod \"a3ef7860-c026-45cc-aa7c-4946d59971c9\" (UID: 
\"a3ef7860-c026-45cc-aa7c-4946d59971c9\") " Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.274606 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "a3ef7860-c026-45cc-aa7c-4946d59971c9" (UID: "a3ef7860-c026-45cc-aa7c-4946d59971c9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.274643 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-config" (OuterVolumeSpecName: "config") pod "a3ef7860-c026-45cc-aa7c-4946d59971c9" (UID: "a3ef7860-c026-45cc-aa7c-4946d59971c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.276917 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ef7860-c026-45cc-aa7c-4946d59971c9-kube-api-access-gx6dl" (OuterVolumeSpecName: "kube-api-access-gx6dl") pod "a3ef7860-c026-45cc-aa7c-4946d59971c9" (UID: "a3ef7860-c026-45cc-aa7c-4946d59971c9"). InnerVolumeSpecName "kube-api-access-gx6dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.277027 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7ff77cd969-bzqb2"] Mar 14 09:04:14 crc kubenswrapper[4869]: E0314 09:04:14.277296 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acef2516-cdf2-4e58-bae8-00290015b684" containerName="controller-manager" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.277315 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="acef2516-cdf2-4e58-bae8-00290015b684" containerName="controller-manager" Mar 14 09:04:14 crc kubenswrapper[4869]: E0314 09:04:14.277343 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.277351 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" Mar 14 09:04:14 crc kubenswrapper[4869]: E0314 09:04:14.277367 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.277374 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.277533 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="acef2516-cdf2-4e58-bae8-00290015b684" containerName="controller-manager" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.277545 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.278011 4869 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.278722 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ef7860-c026-45cc-aa7c-4946d59971c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a3ef7860-c026-45cc-aa7c-4946d59971c9" (UID: "a3ef7860-c026-45cc-aa7c-4946d59971c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.296879 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff77cd969-bzqb2"] Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.302085 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"] Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.302458 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerName="route-controller-manager" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.302897 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.316619 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"] Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.347709 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-788455f67b-qqx4h_a3ef7860-c026-45cc-aa7c-4946d59971c9/route-controller-manager/0.log" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.347765 4869 generic.go:334] "Generic (PLEG): container finished" podID="a3ef7860-c026-45cc-aa7c-4946d59971c9" containerID="9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930" exitCode=0 Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.347890 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.347982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" event={"ID":"a3ef7860-c026-45cc-aa7c-4946d59971c9","Type":"ContainerDied","Data":"9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930"} Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.348041 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h" event={"ID":"a3ef7860-c026-45cc-aa7c-4946d59971c9","Type":"ContainerDied","Data":"459aa51b87fa0d88dd755dd83fe129129efb3684499aca001070a8c061090b57"} Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.348060 4869 scope.go:117] "RemoveContainer" containerID="9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 
09:04:14.352150 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557984-q2lw7" event={"ID":"ce16cfb8-2f11-464c-8fe8-84be308a6131","Type":"ContainerStarted","Data":"5f6282fdc6473044e2d236658c81dce2a91a729157d712bcaa62f1cbe09d2352"} Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.358926 4869 generic.go:334] "Generic (PLEG): container finished" podID="acef2516-cdf2-4e58-bae8-00290015b684" containerID="fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00" exitCode=0 Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.358995 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.358988 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" event={"ID":"acef2516-cdf2-4e58-bae8-00290015b684","Type":"ContainerDied","Data":"fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00"} Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.359036 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c764f6b9-5g8qb" event={"ID":"acef2516-cdf2-4e58-bae8-00290015b684","Type":"ContainerDied","Data":"e929b58d357cbc461a115732f35a6578f5e59b2801b39adbaf3736dda43dd227"} Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.374681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/640da39c-9e06-42f1-8854-f6c8e07e8e8c-client-ca\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.374724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/640da39c-9e06-42f1-8854-f6c8e07e8e8c-proxy-ca-bundles\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.374746 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/640da39c-9e06-42f1-8854-f6c8e07e8e8c-config\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.374767 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm24x\" (UniqueName: \"kubernetes.io/projected/640da39c-9e06-42f1-8854-f6c8e07e8e8c-kube-api-access-cm24x\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.374816 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/640da39c-9e06-42f1-8854-f6c8e07e8e8c-serving-cert\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.374850 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.374866 4869 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a3ef7860-c026-45cc-aa7c-4946d59971c9-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.374878 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3ef7860-c026-45cc-aa7c-4946d59971c9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.374889 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx6dl\" (UniqueName: \"kubernetes.io/projected/a3ef7860-c026-45cc-aa7c-4946d59971c9-kube-api-access-gx6dl\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.383007 4869 scope.go:117] "RemoveContainer" containerID="5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.391615 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h"] Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.398216 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788455f67b-qqx4h"] Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.403246 4869 scope.go:117] "RemoveContainer" containerID="9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930" Mar 14 09:04:14 crc kubenswrapper[4869]: E0314 09:04:14.404524 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930\": container with ID starting with 9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930 not found: ID does not exist" containerID="9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.404650 4869 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930"} err="failed to get container status \"9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930\": rpc error: code = NotFound desc = could not find container \"9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930\": container with ID starting with 9bee763be051cb51b290b9a23c806c14a609f1584bcbb37ca4ea250f9340a930 not found: ID does not exist" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.404723 4869 scope.go:117] "RemoveContainer" containerID="5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.404868 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c764f6b9-5g8qb"] Mar 14 09:04:14 crc kubenswrapper[4869]: E0314 09:04:14.405203 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6\": container with ID starting with 5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6 not found: ID does not exist" containerID="5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.405228 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6"} err="failed to get container status \"5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6\": rpc error: code = NotFound desc = could not find container \"5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6\": container with ID starting with 5ab123b438a8c29577f013a8adfccfc86376063c7030fc3a92f23f0de350dcd6 not found: ID does not exist" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.405242 4869 
scope.go:117] "RemoveContainer" containerID="fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.409528 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c764f6b9-5g8qb"] Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.422654 4869 scope.go:117] "RemoveContainer" containerID="fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00" Mar 14 09:04:14 crc kubenswrapper[4869]: E0314 09:04:14.423265 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00\": container with ID starting with fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00 not found: ID does not exist" containerID="fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.423296 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00"} err="failed to get container status \"fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00\": rpc error: code = NotFound desc = could not find container \"fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00\": container with ID starting with fe023f3c8a319f4e0a8ab6c91405a09428771d95f6339fd85dfca0dc2d03fd00 not found: ID does not exist" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.475920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037e4f51-aedd-461b-a7f4-71c4085b6645-config\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg" Mar 14 09:04:14 crc 
kubenswrapper[4869]: I0314 09:04:14.476350 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/037e4f51-aedd-461b-a7f4-71c4085b6645-client-ca\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.476380 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/640da39c-9e06-42f1-8854-f6c8e07e8e8c-client-ca\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.476398 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/640da39c-9e06-42f1-8854-f6c8e07e8e8c-proxy-ca-bundles\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.476418 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/640da39c-9e06-42f1-8854-f6c8e07e8e8c-config\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.476437 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm24x\" (UniqueName: \"kubernetes.io/projected/640da39c-9e06-42f1-8854-f6c8e07e8e8c-kube-api-access-cm24x\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: 
\"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.476469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzz9j\" (UniqueName: \"kubernetes.io/projected/037e4f51-aedd-461b-a7f4-71c4085b6645-kube-api-access-jzz9j\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.476497 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/037e4f51-aedd-461b-a7f4-71c4085b6645-serving-cert\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.476541 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/640da39c-9e06-42f1-8854-f6c8e07e8e8c-serving-cert\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.477598 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/640da39c-9e06-42f1-8854-f6c8e07e8e8c-client-ca\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.478356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/640da39c-9e06-42f1-8854-f6c8e07e8e8c-config\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.478447 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/640da39c-9e06-42f1-8854-f6c8e07e8e8c-proxy-ca-bundles\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.481618 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/640da39c-9e06-42f1-8854-f6c8e07e8e8c-serving-cert\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.496759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm24x\" (UniqueName: \"kubernetes.io/projected/640da39c-9e06-42f1-8854-f6c8e07e8e8c-kube-api-access-cm24x\") pod \"controller-manager-7ff77cd969-bzqb2\" (UID: \"640da39c-9e06-42f1-8854-f6c8e07e8e8c\") " pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.578417 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037e4f51-aedd-461b-a7f4-71c4085b6645-config\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg" Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.578465 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/037e4f51-aedd-461b-a7f4-71c4085b6645-client-ca\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.578526 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzz9j\" (UniqueName: \"kubernetes.io/projected/037e4f51-aedd-461b-a7f4-71c4085b6645-kube-api-access-jzz9j\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.578553 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/037e4f51-aedd-461b-a7f4-71c4085b6645-serving-cert\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.579564 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/037e4f51-aedd-461b-a7f4-71c4085b6645-client-ca\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.579853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037e4f51-aedd-461b-a7f4-71c4085b6645-config\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.581533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/037e4f51-aedd-461b-a7f4-71c4085b6645-serving-cert\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.595580 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzz9j\" (UniqueName: \"kubernetes.io/projected/037e4f51-aedd-461b-a7f4-71c4085b6645-kube-api-access-jzz9j\") pod \"route-controller-manager-d4f4469f4-cnhlg\" (UID: \"037e4f51-aedd-461b-a7f4-71c4085b6645\") " pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.605621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2"
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.629001 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.914757 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff77cd969-bzqb2"]
Mar 14 09:04:14 crc kubenswrapper[4869]: W0314 09:04:14.925571 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod640da39c_9e06_42f1_8854_f6c8e07e8e8c.slice/crio-eeaf51bec51e66072b45c52b4ddb335facdbf1e3ad8d64374405e40be82030f5 WatchSource:0}: Error finding container eeaf51bec51e66072b45c52b4ddb335facdbf1e3ad8d64374405e40be82030f5: Status 404 returned error can't find the container with id eeaf51bec51e66072b45c52b4ddb335facdbf1e3ad8d64374405e40be82030f5
Mar 14 09:04:14 crc kubenswrapper[4869]: I0314 09:04:14.954617 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"]
Mar 14 09:04:14 crc kubenswrapper[4869]: W0314 09:04:14.968141 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod037e4f51_aedd_461b_a7f4_71c4085b6645.slice/crio-312f0a079b8847f4d75d7f42fb21f6c344599386d93296adbbc7716097700548 WatchSource:0}: Error finding container 312f0a079b8847f4d75d7f42fb21f6c344599386d93296adbbc7716097700548: Status 404 returned error can't find the container with id 312f0a079b8847f4d75d7f42fb21f6c344599386d93296adbbc7716097700548
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.367584 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" event={"ID":"640da39c-9e06-42f1-8854-f6c8e07e8e8c","Type":"ContainerStarted","Data":"ce925f0fc41c42e2ccc06b1c208808dbc3cebb277f58f57c9eab42430dc483e1"}
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.367631 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" event={"ID":"640da39c-9e06-42f1-8854-f6c8e07e8e8c","Type":"ContainerStarted","Data":"eeaf51bec51e66072b45c52b4ddb335facdbf1e3ad8d64374405e40be82030f5"}
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.369215 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2"
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.373382 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg" event={"ID":"037e4f51-aedd-461b-a7f4-71c4085b6645","Type":"ContainerStarted","Data":"84eeed0ae9fbc0187dc3d4f85fc1aaa250d5e7871276c3341778779cbee2b6ca"}
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.373413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg" event={"ID":"037e4f51-aedd-461b-a7f4-71c4085b6645","Type":"ContainerStarted","Data":"312f0a079b8847f4d75d7f42fb21f6c344599386d93296adbbc7716097700548"}
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.373711 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.379840 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2"
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.398788 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7ff77cd969-bzqb2" podStartSLOduration=1.3987323520000001 podStartE2EDuration="1.398732352s" podCreationTimestamp="2026-03-14 09:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:04:15.391558281 +0000 UTC m=+408.363840354" watchObservedRunningTime="2026-03-14 09:04:15.398732352 +0000 UTC m=+408.371014405"
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.589340 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg"
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.616445 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d4f4469f4-cnhlg" podStartSLOduration=1.6164159040000001 podStartE2EDuration="1.616415904s" podCreationTimestamp="2026-03-14 09:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:04:15.442201804 +0000 UTC m=+408.414483857" watchObservedRunningTime="2026-03-14 09:04:15.616415904 +0000 UTC m=+408.588697967"
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.714652 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ef7860-c026-45cc-aa7c-4946d59971c9" path="/var/lib/kubelet/pods/a3ef7860-c026-45cc-aa7c-4946d59971c9/volumes"
Mar 14 09:04:15 crc kubenswrapper[4869]: I0314 09:04:15.715330 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acef2516-cdf2-4e58-bae8-00290015b684" path="/var/lib/kubelet/pods/acef2516-cdf2-4e58-bae8-00290015b684/volumes"
Mar 14 09:04:16 crc kubenswrapper[4869]: I0314 09:04:16.383892 4869 generic.go:334] "Generic (PLEG): container finished" podID="ce16cfb8-2f11-464c-8fe8-84be308a6131" containerID="42e3c2e283c8a71d32a8d9404bc6c1c1bed71bede41d6d80b85ca92c9b51909c" exitCode=0
Mar 14 09:04:16 crc kubenswrapper[4869]: I0314 09:04:16.383997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557984-q2lw7" event={"ID":"ce16cfb8-2f11-464c-8fe8-84be308a6131","Type":"ContainerDied","Data":"42e3c2e283c8a71d32a8d9404bc6c1c1bed71bede41d6d80b85ca92c9b51909c"}
Mar 14 09:04:17 crc kubenswrapper[4869]: I0314 09:04:17.709812 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557984-q2lw7"
Mar 14 09:04:17 crc kubenswrapper[4869]: I0314 09:04:17.852582 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq59b\" (UniqueName: \"kubernetes.io/projected/ce16cfb8-2f11-464c-8fe8-84be308a6131-kube-api-access-hq59b\") pod \"ce16cfb8-2f11-464c-8fe8-84be308a6131\" (UID: \"ce16cfb8-2f11-464c-8fe8-84be308a6131\") "
Mar 14 09:04:17 crc kubenswrapper[4869]: I0314 09:04:17.862086 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce16cfb8-2f11-464c-8fe8-84be308a6131-kube-api-access-hq59b" (OuterVolumeSpecName: "kube-api-access-hq59b") pod "ce16cfb8-2f11-464c-8fe8-84be308a6131" (UID: "ce16cfb8-2f11-464c-8fe8-84be308a6131"). InnerVolumeSpecName "kube-api-access-hq59b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:04:17 crc kubenswrapper[4869]: I0314 09:04:17.953957 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq59b\" (UniqueName: \"kubernetes.io/projected/ce16cfb8-2f11-464c-8fe8-84be308a6131-kube-api-access-hq59b\") on node \"crc\" DevicePath \"\""
Mar 14 09:04:18 crc kubenswrapper[4869]: I0314 09:04:18.401059 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557984-q2lw7" event={"ID":"ce16cfb8-2f11-464c-8fe8-84be308a6131","Type":"ContainerDied","Data":"5f6282fdc6473044e2d236658c81dce2a91a729157d712bcaa62f1cbe09d2352"}
Mar 14 09:04:18 crc kubenswrapper[4869]: I0314 09:04:18.401119 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f6282fdc6473044e2d236658c81dce2a91a729157d712bcaa62f1cbe09d2352"
Mar 14 09:04:18 crc kubenswrapper[4869]: I0314 09:04:18.401134 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557984-q2lw7"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.547122 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6cz2t"]
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.548208 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6cz2t" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerName="registry-server" containerID="cri-o://7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416" gracePeriod=30
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.561275 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94926"]
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.561738 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-94926" podUID="8466d496-2ca4-49f2-96ff-75386b047783" containerName="registry-server" containerID="cri-o://1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235" gracePeriod=30
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.595467 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjgpv"]
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.595902 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" podUID="7d0b3ce9-3a56-4562-9534-dc512f82474d" containerName="marketplace-operator" containerID="cri-o://3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77" gracePeriod=30
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.615887 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kv6g"]
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.617849 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9kv6g" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerName="registry-server" containerID="cri-o://677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62" gracePeriod=30
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.625708 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wt8jx"]
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.626133 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wt8jx" podUID="25990a28-3536-4602-9439-666774908da0" containerName="registry-server" containerID="cri-o://1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55" gracePeriod=30
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.634269 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9twg2"]
Mar 14 09:04:34 crc kubenswrapper[4869]: E0314 09:04:34.634616 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce16cfb8-2f11-464c-8fe8-84be308a6131" containerName="oc"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.634644 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce16cfb8-2f11-464c-8fe8-84be308a6131" containerName="oc"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.634869 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce16cfb8-2f11-464c-8fe8-84be308a6131" containerName="oc"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.639072 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9twg2"]
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.639192 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.792326 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d2388b0-415d-43ea-9d85-a417297abc29-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9twg2\" (UID: \"0d2388b0-415d-43ea-9d85-a417297abc29\") " pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.792404 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d2388b0-415d-43ea-9d85-a417297abc29-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9twg2\" (UID: \"0d2388b0-415d-43ea-9d85-a417297abc29\") " pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.792791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687fm\" (UniqueName: \"kubernetes.io/projected/0d2388b0-415d-43ea-9d85-a417297abc29-kube-api-access-687fm\") pod \"marketplace-operator-79b997595-9twg2\" (UID: \"0d2388b0-415d-43ea-9d85-a417297abc29\") " pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.894261 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-687fm\" (UniqueName: \"kubernetes.io/projected/0d2388b0-415d-43ea-9d85-a417297abc29-kube-api-access-687fm\") pod \"marketplace-operator-79b997595-9twg2\" (UID: \"0d2388b0-415d-43ea-9d85-a417297abc29\") " pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.894680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d2388b0-415d-43ea-9d85-a417297abc29-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9twg2\" (UID: \"0d2388b0-415d-43ea-9d85-a417297abc29\") " pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.894727 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d2388b0-415d-43ea-9d85-a417297abc29-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9twg2\" (UID: \"0d2388b0-415d-43ea-9d85-a417297abc29\") " pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.895861 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d2388b0-415d-43ea-9d85-a417297abc29-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9twg2\" (UID: \"0d2388b0-415d-43ea-9d85-a417297abc29\") " pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.921807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d2388b0-415d-43ea-9d85-a417297abc29-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9twg2\" (UID: \"0d2388b0-415d-43ea-9d85-a417297abc29\") " pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.925296 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-687fm\" (UniqueName: \"kubernetes.io/projected/0d2388b0-415d-43ea-9d85-a417297abc29-kube-api-access-687fm\") pod \"marketplace-operator-79b997595-9twg2\" (UID: \"0d2388b0-415d-43ea-9d85-a417297abc29\") " pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:34 crc kubenswrapper[4869]: I0314 09:04:34.986006 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9twg2"
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.242169 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6cz2t"
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.389413 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wt8jx"
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.394747 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv"
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.400562 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-utilities\") pod \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.400743 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc2t6\" (UniqueName: \"kubernetes.io/projected/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-kube-api-access-kc2t6\") pod \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.400783 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-catalog-content\") pod \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\" (UID: \"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.405342 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kv6g"
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.410330 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-kube-api-access-kc2t6" (OuterVolumeSpecName: "kube-api-access-kc2t6") pod "3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" (UID: "3b454c3f-60ab-4a89-ab1e-e1e15cf08b66"). InnerVolumeSpecName "kube-api-access-kc2t6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.414582 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-utilities" (OuterVolumeSpecName: "utilities") pod "3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" (UID: "3b454c3f-60ab-4a89-ab1e-e1e15cf08b66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.421103 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94926"
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.483076 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" (UID: "3b454c3f-60ab-4a89-ab1e-e1e15cf08b66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.501685 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhs9v\" (UniqueName: \"kubernetes.io/projected/7d0b3ce9-3a56-4562-9534-dc512f82474d-kube-api-access-nhs9v\") pod \"7d0b3ce9-3a56-4562-9534-dc512f82474d\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.501998 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-trusted-ca\") pod \"7d0b3ce9-3a56-4562-9534-dc512f82474d\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.502096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-utilities\") pod \"8466d496-2ca4-49f2-96ff-75386b047783\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.502406 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9fph\" (UniqueName: \"kubernetes.io/projected/25990a28-3536-4602-9439-666774908da0-kube-api-access-l9fph\") pod \"25990a28-3536-4602-9439-666774908da0\" (UID: \"25990a28-3536-4602-9439-666774908da0\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.502495 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-utilities\") pod \"25990a28-3536-4602-9439-666774908da0\" (UID: \"25990a28-3536-4602-9439-666774908da0\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.502605 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-operator-metrics\") pod \"7d0b3ce9-3a56-4562-9534-dc512f82474d\" (UID: \"7d0b3ce9-3a56-4562-9534-dc512f82474d\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.502694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-utilities\") pod \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.502798 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bgfx\" (UniqueName: \"kubernetes.io/projected/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-kube-api-access-2bgfx\") pod \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.502887 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj949\" (UniqueName: \"kubernetes.io/projected/8466d496-2ca4-49f2-96ff-75386b047783-kube-api-access-jj949\") pod \"8466d496-2ca4-49f2-96ff-75386b047783\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.502984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-catalog-content\") pod \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\" (UID: \"40c9b0bd-b30e-470c-bf30-bd55c35e2e84\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.503059 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-catalog-content\") pod \"8466d496-2ca4-49f2-96ff-75386b047783\" (UID: \"8466d496-2ca4-49f2-96ff-75386b047783\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.503133 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-catalog-content\") pod \"25990a28-3536-4602-9439-666774908da0\" (UID: \"25990a28-3536-4602-9439-666774908da0\") "
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.503681 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-utilities" (OuterVolumeSpecName: "utilities") pod "25990a28-3536-4602-9439-666774908da0" (UID: "25990a28-3536-4602-9439-666774908da0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.503841 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-utilities" (OuterVolumeSpecName: "utilities") pod "40c9b0bd-b30e-470c-bf30-bd55c35e2e84" (UID: "40c9b0bd-b30e-470c-bf30-bd55c35e2e84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.504101 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7d0b3ce9-3a56-4562-9534-dc512f82474d" (UID: "7d0b3ce9-3a56-4562-9534-dc512f82474d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.503869 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wt8jx" event={"ID":"25990a28-3536-4602-9439-666774908da0","Type":"ContainerDied","Data":"1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55"}
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.520658 4869 scope.go:117] "RemoveContainer" containerID="1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55"
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.503842 4869 generic.go:334] "Generic (PLEG): container finished" podID="25990a28-3536-4602-9439-666774908da0" containerID="1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55" exitCode=0
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.503936 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wt8jx"
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.505020 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-utilities" (OuterVolumeSpecName: "utilities") pod "8466d496-2ca4-49f2-96ff-75386b047783" (UID: "8466d496-2ca4-49f2-96ff-75386b047783"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.511089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8466d496-2ca4-49f2-96ff-75386b047783-kube-api-access-jj949" (OuterVolumeSpecName: "kube-api-access-jj949") pod "8466d496-2ca4-49f2-96ff-75386b047783" (UID: "8466d496-2ca4-49f2-96ff-75386b047783"). InnerVolumeSpecName "kube-api-access-jj949". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.520882 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wt8jx" event={"ID":"25990a28-3536-4602-9439-666774908da0","Type":"ContainerDied","Data":"482509ce41a7b1d2d4b846b022d0f0dd0021e59aa39d61a07dd8f491b30b6785"}
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.521262 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.521344 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.521403 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.521465 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc2t6\" (UniqueName: \"kubernetes.io/projected/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-kube-api-access-kc2t6\") on node \"crc\" DevicePath \"\""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.521543 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.521602 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.528714 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25990a28-3536-4602-9439-666774908da0-kube-api-access-l9fph" (OuterVolumeSpecName: "kube-api-access-l9fph") pod "25990a28-3536-4602-9439-666774908da0" (UID: "25990a28-3536-4602-9439-666774908da0"). InnerVolumeSpecName "kube-api-access-l9fph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.529376 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7d0b3ce9-3a56-4562-9534-dc512f82474d" (UID: "7d0b3ce9-3a56-4562-9534-dc512f82474d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.529474 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0b3ce9-3a56-4562-9534-dc512f82474d-kube-api-access-nhs9v" (OuterVolumeSpecName: "kube-api-access-nhs9v") pod "7d0b3ce9-3a56-4562-9534-dc512f82474d" (UID: "7d0b3ce9-3a56-4562-9534-dc512f82474d"). InnerVolumeSpecName "kube-api-access-nhs9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.530775 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-kube-api-access-2bgfx" (OuterVolumeSpecName: "kube-api-access-2bgfx") pod "40c9b0bd-b30e-470c-bf30-bd55c35e2e84" (UID: "40c9b0bd-b30e-470c-bf30-bd55c35e2e84"). InnerVolumeSpecName "kube-api-access-2bgfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.531246 4869 generic.go:334] "Generic (PLEG): container finished" podID="7d0b3ce9-3a56-4562-9534-dc512f82474d" containerID="3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77" exitCode=0
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.531395 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv"
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.531388 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" event={"ID":"7d0b3ce9-3a56-4562-9534-dc512f82474d","Type":"ContainerDied","Data":"3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77"}
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.531473 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fjgpv" event={"ID":"7d0b3ce9-3a56-4562-9534-dc512f82474d","Type":"ContainerDied","Data":"3e2c93f7d0ab0355d440398462406b5d3376b2ea504710cce3c77298e975b23e"}
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.535306 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40c9b0bd-b30e-470c-bf30-bd55c35e2e84" (UID: "40c9b0bd-b30e-470c-bf30-bd55c35e2e84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.536192 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9twg2"]
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.542468 4869 generic.go:334] "Generic (PLEG): container finished" podID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerID="677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62" exitCode=0
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.542619 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kv6g" event={"ID":"40c9b0bd-b30e-470c-bf30-bd55c35e2e84","Type":"ContainerDied","Data":"677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62"}
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.542688 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kv6g" event={"ID":"40c9b0bd-b30e-470c-bf30-bd55c35e2e84","Type":"ContainerDied","Data":"04a797ef74df3d2682e1c4a2f9c00b8970dfa10018ddd3a48b41d041e0258fb7"}
Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.542697 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kv6g" Mar 14 09:04:35 crc kubenswrapper[4869]: W0314 09:04:35.546975 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d2388b0_415d_43ea_9d85_a417297abc29.slice/crio-20dc34b603287bffd9ebfc9fd12ac90503f580cda67c80d916b0b6e73d0c081f WatchSource:0}: Error finding container 20dc34b603287bffd9ebfc9fd12ac90503f580cda67c80d916b0b6e73d0c081f: Status 404 returned error can't find the container with id 20dc34b603287bffd9ebfc9fd12ac90503f580cda67c80d916b0b6e73d0c081f Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.547096 4869 generic.go:334] "Generic (PLEG): container finished" podID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerID="7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416" exitCode=0 Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.547252 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cz2t" event={"ID":"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66","Type":"ContainerDied","Data":"7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416"} Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.547425 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6cz2t" event={"ID":"3b454c3f-60ab-4a89-ab1e-e1e15cf08b66","Type":"ContainerDied","Data":"7fb07d1341ad1dfdc0327b306bff847913b9c74ca28aa6bc8806b4b887a34f9f"} Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.547290 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6cz2t" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.553671 4869 generic.go:334] "Generic (PLEG): container finished" podID="8466d496-2ca4-49f2-96ff-75386b047783" containerID="1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235" exitCode=0 Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.554118 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94926" event={"ID":"8466d496-2ca4-49f2-96ff-75386b047783","Type":"ContainerDied","Data":"1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235"} Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.555501 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94926" event={"ID":"8466d496-2ca4-49f2-96ff-75386b047783","Type":"ContainerDied","Data":"34debbefbc30dcbcf8242579dd7b8f5fb6f598706bc3bcc3982e803459afcb17"} Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.555247 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94926" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.569344 4869 scope.go:117] "RemoveContainer" containerID="5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.574276 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjgpv"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.580705 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fjgpv"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.597039 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8466d496-2ca4-49f2-96ff-75386b047783" (UID: "8466d496-2ca4-49f2-96ff-75386b047783"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.605085 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6cz2t"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.610964 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6cz2t"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.615806 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kv6g"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.618714 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kv6g"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.619903 4869 scope.go:117] "RemoveContainer" containerID="d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.622703 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhs9v\" (UniqueName: \"kubernetes.io/projected/7d0b3ce9-3a56-4562-9534-dc512f82474d-kube-api-access-nhs9v\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.622731 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.622745 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9fph\" (UniqueName: \"kubernetes.io/projected/25990a28-3536-4602-9439-666774908da0-kube-api-access-l9fph\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.622760 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/7d0b3ce9-3a56-4562-9534-dc512f82474d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.622774 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bgfx\" (UniqueName: \"kubernetes.io/projected/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-kube-api-access-2bgfx\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.622785 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj949\" (UniqueName: \"kubernetes.io/projected/8466d496-2ca4-49f2-96ff-75386b047783-kube-api-access-jj949\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.622797 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c9b0bd-b30e-470c-bf30-bd55c35e2e84-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.622811 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8466d496-2ca4-49f2-96ff-75386b047783-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.637471 4869 scope.go:117] "RemoveContainer" containerID="1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.638149 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55\": container with ID starting with 1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55 not found: ID does not exist" containerID="1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.638190 4869 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55"} err="failed to get container status \"1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55\": rpc error: code = NotFound desc = could not find container \"1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55\": container with ID starting with 1277829a73c6b7ba3dcfa7ac9f4d54ade41f720fb7065dd4dc891c31572d2c55 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.638218 4869 scope.go:117] "RemoveContainer" containerID="5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.638779 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0\": container with ID starting with 5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0 not found: ID does not exist" containerID="5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.638826 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0"} err="failed to get container status \"5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0\": rpc error: code = NotFound desc = could not find container \"5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0\": container with ID starting with 5a1e213c7502bd2dd74e06316a4b8e4414cdd215aff475d380bcbf076d65aad0 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.638862 4869 scope.go:117] "RemoveContainer" containerID="d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.639305 4869 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954\": container with ID starting with d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954 not found: ID does not exist" containerID="d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.639364 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954"} err="failed to get container status \"d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954\": rpc error: code = NotFound desc = could not find container \"d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954\": container with ID starting with d449c6f4521bf1bc9c27774f19d55037523356922b79e51aee68d49086ef4954 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.639399 4869 scope.go:117] "RemoveContainer" containerID="3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.659400 4869 scope.go:117] "RemoveContainer" containerID="3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.660023 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77\": container with ID starting with 3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77 not found: ID does not exist" containerID="3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.660074 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77"} err="failed to get container status \"3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77\": rpc error: code = NotFound desc = could not find container \"3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77\": container with ID starting with 3eefb8cc2f73599577ac543d2c7fc3f183a0e893cf2af5bdeaa61ebe11893f77 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.660109 4869 scope.go:117] "RemoveContainer" containerID="677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.714762 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" path="/var/lib/kubelet/pods/3b454c3f-60ab-4a89-ab1e-e1e15cf08b66/volumes" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.716400 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" path="/var/lib/kubelet/pods/40c9b0bd-b30e-470c-bf30-bd55c35e2e84/volumes" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.717632 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d0b3ce9-3a56-4562-9534-dc512f82474d" path="/var/lib/kubelet/pods/7d0b3ce9-3a56-4562-9534-dc512f82474d/volumes" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.729283 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25990a28-3536-4602-9439-666774908da0" (UID: "25990a28-3536-4602-9439-666774908da0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.733908 4869 scope.go:117] "RemoveContainer" containerID="2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.765833 4869 scope.go:117] "RemoveContainer" containerID="9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.784356 4869 scope.go:117] "RemoveContainer" containerID="677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.784817 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62\": container with ID starting with 677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62 not found: ID does not exist" containerID="677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.784847 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62"} err="failed to get container status \"677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62\": rpc error: code = NotFound desc = could not find container \"677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62\": container with ID starting with 677f4173083f1779a408ad362bc8babc5c4a9b8c5f8d47ee5e3ab699e34d1a62 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.784873 4869 scope.go:117] "RemoveContainer" containerID="2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.785286 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9\": container with ID starting with 2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9 not found: ID does not exist" containerID="2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.785313 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9"} err="failed to get container status \"2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9\": rpc error: code = NotFound desc = could not find container \"2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9\": container with ID starting with 2d10f6a1f8fe7eabbbdc2db788038a5473c5e325e884900e1d584b134d5a96e9 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.785328 4869 scope.go:117] "RemoveContainer" containerID="9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.785676 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8\": container with ID starting with 9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8 not found: ID does not exist" containerID="9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.785733 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8"} err="failed to get container status \"9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8\": rpc error: code = NotFound desc = could not find container \"9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8\": 
container with ID starting with 9f03de1d68c6965c58db5ccdae0c0d1499f19ef77d8bd61d423538c0406377d8 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.785772 4869 scope.go:117] "RemoveContainer" containerID="7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.802913 4869 scope.go:117] "RemoveContainer" containerID="47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.822612 4869 scope.go:117] "RemoveContainer" containerID="aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.824874 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25990a28-3536-4602-9439-666774908da0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.845408 4869 scope.go:117] "RemoveContainer" containerID="7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.846005 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416\": container with ID starting with 7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416 not found: ID does not exist" containerID="7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.846054 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416"} err="failed to get container status \"7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416\": rpc error: code = NotFound desc = could not find container 
\"7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416\": container with ID starting with 7b838ed9b8f9a71a0c38b86318ebe6714a18cd005a4cec5ebdb32d40a8ffe416 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.846090 4869 scope.go:117] "RemoveContainer" containerID="47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.846685 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0\": container with ID starting with 47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0 not found: ID does not exist" containerID="47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.846765 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0"} err="failed to get container status \"47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0\": rpc error: code = NotFound desc = could not find container \"47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0\": container with ID starting with 47d3318a758a21c6e8d6d7c7162f2e0b4a7bb2b03f33f9c1333257adc4d1faa0 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.846814 4869 scope.go:117] "RemoveContainer" containerID="aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.847326 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c\": container with ID starting with aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c not found: ID does not exist" 
containerID="aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.847359 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c"} err="failed to get container status \"aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c\": rpc error: code = NotFound desc = could not find container \"aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c\": container with ID starting with aecb728fdefac90886eede5d682567593db480445530946156108b840b5dba0c not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.847381 4869 scope.go:117] "RemoveContainer" containerID="1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.869174 4869 scope.go:117] "RemoveContainer" containerID="95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.883900 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wt8jx"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.888553 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wt8jx"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.891238 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94926"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.894455 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-94926"] Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.896067 4869 scope.go:117] "RemoveContainer" containerID="a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.912787 4869 scope.go:117] "RemoveContainer" 
containerID="1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.913476 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235\": container with ID starting with 1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235 not found: ID does not exist" containerID="1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.913606 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235"} err="failed to get container status \"1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235\": rpc error: code = NotFound desc = could not find container \"1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235\": container with ID starting with 1e640bd79ee0051099c58a7d5067be24439623bc16581ca3582bee37778dc235 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.913637 4869 scope.go:117] "RemoveContainer" containerID="95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.914048 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952\": container with ID starting with 95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952 not found: ID does not exist" containerID="95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.914188 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952"} err="failed to get container status \"95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952\": rpc error: code = NotFound desc = could not find container \"95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952\": container with ID starting with 95faad4ef598a447185a930fd7bae4170e125c7fb7b0488884aeaba23b47b952 not found: ID does not exist" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.914299 4869 scope.go:117] "RemoveContainer" containerID="a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83" Mar 14 09:04:35 crc kubenswrapper[4869]: E0314 09:04:35.914677 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83\": container with ID starting with a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83 not found: ID does not exist" containerID="a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83" Mar 14 09:04:35 crc kubenswrapper[4869]: I0314 09:04:35.914704 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83"} err="failed to get container status \"a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83\": rpc error: code = NotFound desc = could not find container \"a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83\": container with ID starting with a8b07d22e03df2b9bceffb9469c16b7518472c7246631ac4d7d92c6528044d83 not found: ID does not exist" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.562289 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9twg2" 
event={"ID":"0d2388b0-415d-43ea-9d85-a417297abc29","Type":"ContainerStarted","Data":"5043f5aa781c5d8d9972833b714cbbd02820fb6ef1d0d84d5cbc86229e9fed60"} Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.562664 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9twg2" event={"ID":"0d2388b0-415d-43ea-9d85-a417297abc29","Type":"ContainerStarted","Data":"20dc34b603287bffd9ebfc9fd12ac90503f580cda67c80d916b0b6e73d0c081f"} Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.562932 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9twg2" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.569906 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9twg2" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.585463 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9twg2" podStartSLOduration=2.585441903 podStartE2EDuration="2.585441903s" podCreationTimestamp="2026-03-14 09:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:04:36.582605521 +0000 UTC m=+429.554887574" watchObservedRunningTime="2026-03-14 09:04:36.585441903 +0000 UTC m=+429.557723956" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.774961 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnql"] Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775222 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25990a28-3536-4602-9439-666774908da0" containerName="extract-utilities" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775238 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="25990a28-3536-4602-9439-666774908da0" containerName="extract-utilities" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775251 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775260 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775274 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775285 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775295 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerName="extract-utilities" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775306 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerName="extract-utilities" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775320 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8466d496-2ca4-49f2-96ff-75386b047783" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775328 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8466d496-2ca4-49f2-96ff-75386b047783" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775343 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8466d496-2ca4-49f2-96ff-75386b047783" containerName="extract-utilities" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775352 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8466d496-2ca4-49f2-96ff-75386b047783" containerName="extract-utilities" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775363 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25990a28-3536-4602-9439-666774908da0" containerName="extract-content" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775372 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="25990a28-3536-4602-9439-666774908da0" containerName="extract-content" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775384 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0b3ce9-3a56-4562-9534-dc512f82474d" containerName="marketplace-operator" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775393 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0b3ce9-3a56-4562-9534-dc512f82474d" containerName="marketplace-operator" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775409 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerName="extract-utilities" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775419 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerName="extract-utilities" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775435 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerName="extract-content" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775444 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerName="extract-content" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775455 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerName="extract-content" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775464 4869 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerName="extract-content" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775477 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25990a28-3536-4602-9439-666774908da0" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775485 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="25990a28-3536-4602-9439-666774908da0" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: E0314 09:04:36.775500 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8466d496-2ca4-49f2-96ff-75386b047783" containerName="extract-content" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775528 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8466d496-2ca4-49f2-96ff-75386b047783" containerName="extract-content" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775636 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="25990a28-3536-4602-9439-666774908da0" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775653 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b454c3f-60ab-4a89-ab1e-e1e15cf08b66" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775665 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8466d496-2ca4-49f2-96ff-75386b047783" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775683 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0b3ce9-3a56-4562-9534-dc512f82474d" containerName="marketplace-operator" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.775693 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c9b0bd-b30e-470c-bf30-bd55c35e2e84" containerName="registry-server" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.776592 4869 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.779832 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.791000 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnql"] Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.943235 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kptcr\" (UniqueName: \"kubernetes.io/projected/5b63d540-c356-43fe-bf6a-c1f8aad19156-kube-api-access-kptcr\") pod \"redhat-marketplace-nnnql\" (UID: \"5b63d540-c356-43fe-bf6a-c1f8aad19156\") " pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.943307 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b63d540-c356-43fe-bf6a-c1f8aad19156-catalog-content\") pod \"redhat-marketplace-nnnql\" (UID: \"5b63d540-c356-43fe-bf6a-c1f8aad19156\") " pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.943536 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b63d540-c356-43fe-bf6a-c1f8aad19156-utilities\") pod \"redhat-marketplace-nnnql\" (UID: \"5b63d540-c356-43fe-bf6a-c1f8aad19156\") " pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.973762 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4d9kq"] Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.975019 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.980656 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 14 09:04:36 crc kubenswrapper[4869]: I0314 09:04:36.988268 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4d9kq"] Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.044653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kptcr\" (UniqueName: \"kubernetes.io/projected/5b63d540-c356-43fe-bf6a-c1f8aad19156-kube-api-access-kptcr\") pod \"redhat-marketplace-nnnql\" (UID: \"5b63d540-c356-43fe-bf6a-c1f8aad19156\") " pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.044757 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b63d540-c356-43fe-bf6a-c1f8aad19156-catalog-content\") pod \"redhat-marketplace-nnnql\" (UID: \"5b63d540-c356-43fe-bf6a-c1f8aad19156\") " pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.044818 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b63d540-c356-43fe-bf6a-c1f8aad19156-utilities\") pod \"redhat-marketplace-nnnql\" (UID: \"5b63d540-c356-43fe-bf6a-c1f8aad19156\") " pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.045374 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b63d540-c356-43fe-bf6a-c1f8aad19156-catalog-content\") pod \"redhat-marketplace-nnnql\" (UID: \"5b63d540-c356-43fe-bf6a-c1f8aad19156\") " 
pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.045401 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b63d540-c356-43fe-bf6a-c1f8aad19156-utilities\") pod \"redhat-marketplace-nnnql\" (UID: \"5b63d540-c356-43fe-bf6a-c1f8aad19156\") " pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.064899 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kptcr\" (UniqueName: \"kubernetes.io/projected/5b63d540-c356-43fe-bf6a-c1f8aad19156-kube-api-access-kptcr\") pod \"redhat-marketplace-nnnql\" (UID: \"5b63d540-c356-43fe-bf6a-c1f8aad19156\") " pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.107334 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.146618 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrtl\" (UniqueName: \"kubernetes.io/projected/1fb58f92-8606-4713-b0ea-ff91ddcca450-kube-api-access-gxrtl\") pod \"certified-operators-4d9kq\" (UID: \"1fb58f92-8606-4713-b0ea-ff91ddcca450\") " pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.146697 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb58f92-8606-4713-b0ea-ff91ddcca450-utilities\") pod \"certified-operators-4d9kq\" (UID: \"1fb58f92-8606-4713-b0ea-ff91ddcca450\") " pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.146755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb58f92-8606-4713-b0ea-ff91ddcca450-catalog-content\") pod \"certified-operators-4d9kq\" (UID: \"1fb58f92-8606-4713-b0ea-ff91ddcca450\") " pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.248453 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxrtl\" (UniqueName: \"kubernetes.io/projected/1fb58f92-8606-4713-b0ea-ff91ddcca450-kube-api-access-gxrtl\") pod \"certified-operators-4d9kq\" (UID: \"1fb58f92-8606-4713-b0ea-ff91ddcca450\") " pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.248551 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb58f92-8606-4713-b0ea-ff91ddcca450-utilities\") pod \"certified-operators-4d9kq\" (UID: \"1fb58f92-8606-4713-b0ea-ff91ddcca450\") " pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.248599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb58f92-8606-4713-b0ea-ff91ddcca450-catalog-content\") pod \"certified-operators-4d9kq\" (UID: \"1fb58f92-8606-4713-b0ea-ff91ddcca450\") " pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.249393 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fb58f92-8606-4713-b0ea-ff91ddcca450-catalog-content\") pod \"certified-operators-4d9kq\" (UID: \"1fb58f92-8606-4713-b0ea-ff91ddcca450\") " pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.249591 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fb58f92-8606-4713-b0ea-ff91ddcca450-utilities\") pod \"certified-operators-4d9kq\" (UID: \"1fb58f92-8606-4713-b0ea-ff91ddcca450\") " pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.275861 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxrtl\" (UniqueName: \"kubernetes.io/projected/1fb58f92-8606-4713-b0ea-ff91ddcca450-kube-api-access-gxrtl\") pod \"certified-operators-4d9kq\" (UID: \"1fb58f92-8606-4713-b0ea-ff91ddcca450\") " pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.293759 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.561551 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnql"] Mar 14 09:04:37 crc kubenswrapper[4869]: W0314 09:04:37.564875 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b63d540_c356_43fe_bf6a_c1f8aad19156.slice/crio-abb6c41893b3e3eb3dd59b9799a82b399fd53248d419deadf2ad109115ed0f2b WatchSource:0}: Error finding container abb6c41893b3e3eb3dd59b9799a82b399fd53248d419deadf2ad109115ed0f2b: Status 404 returned error can't find the container with id abb6c41893b3e3eb3dd59b9799a82b399fd53248d419deadf2ad109115ed0f2b Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.581724 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnql" event={"ID":"5b63d540-c356-43fe-bf6a-c1f8aad19156","Type":"ContainerStarted","Data":"abb6c41893b3e3eb3dd59b9799a82b399fd53248d419deadf2ad109115ed0f2b"} Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.720614 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="25990a28-3536-4602-9439-666774908da0" path="/var/lib/kubelet/pods/25990a28-3536-4602-9439-666774908da0/volumes" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.722219 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8466d496-2ca4-49f2-96ff-75386b047783" path="/var/lib/kubelet/pods/8466d496-2ca4-49f2-96ff-75386b047783/volumes" Mar 14 09:04:37 crc kubenswrapper[4869]: I0314 09:04:37.750820 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4d9kq"] Mar 14 09:04:38 crc kubenswrapper[4869]: I0314 09:04:38.587679 4869 generic.go:334] "Generic (PLEG): container finished" podID="5b63d540-c356-43fe-bf6a-c1f8aad19156" containerID="bd616603689011075dcceb40059e8f3b334339e21bf69802fca40e8f81d98a45" exitCode=0 Mar 14 09:04:38 crc kubenswrapper[4869]: I0314 09:04:38.588433 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnql" event={"ID":"5b63d540-c356-43fe-bf6a-c1f8aad19156","Type":"ContainerDied","Data":"bd616603689011075dcceb40059e8f3b334339e21bf69802fca40e8f81d98a45"} Mar 14 09:04:38 crc kubenswrapper[4869]: I0314 09:04:38.590656 4869 generic.go:334] "Generic (PLEG): container finished" podID="1fb58f92-8606-4713-b0ea-ff91ddcca450" containerID="b542ec151330eb501dadd65aac3c8e8a0870e91d5389bca30f1b9637324772f6" exitCode=0 Mar 14 09:04:38 crc kubenswrapper[4869]: I0314 09:04:38.590686 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9kq" event={"ID":"1fb58f92-8606-4713-b0ea-ff91ddcca450","Type":"ContainerDied","Data":"b542ec151330eb501dadd65aac3c8e8a0870e91d5389bca30f1b9637324772f6"} Mar 14 09:04:38 crc kubenswrapper[4869]: I0314 09:04:38.592864 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9kq" 
event={"ID":"1fb58f92-8606-4713-b0ea-ff91ddcca450","Type":"ContainerStarted","Data":"ec1bf31d1d5fc0138f82255a46fd2d7ce1fb175be7f6384de031e898a2e87e22"} Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.174653 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pp4fw"] Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.175969 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.178665 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.191185 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pp4fw"] Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.282773 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngnvq\" (UniqueName: \"kubernetes.io/projected/690e1277-d006-4116-a019-5a0c9d2aef19-kube-api-access-ngnvq\") pod \"community-operators-pp4fw\" (UID: \"690e1277-d006-4116-a019-5a0c9d2aef19\") " pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.283342 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690e1277-d006-4116-a019-5a0c9d2aef19-utilities\") pod \"community-operators-pp4fw\" (UID: \"690e1277-d006-4116-a019-5a0c9d2aef19\") " pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.283405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690e1277-d006-4116-a019-5a0c9d2aef19-catalog-content\") pod 
\"community-operators-pp4fw\" (UID: \"690e1277-d006-4116-a019-5a0c9d2aef19\") " pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.375876 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qc2w7"] Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.377168 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.380384 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.384582 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690e1277-d006-4116-a019-5a0c9d2aef19-utilities\") pod \"community-operators-pp4fw\" (UID: \"690e1277-d006-4116-a019-5a0c9d2aef19\") " pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.384689 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690e1277-d006-4116-a019-5a0c9d2aef19-catalog-content\") pod \"community-operators-pp4fw\" (UID: \"690e1277-d006-4116-a019-5a0c9d2aef19\") " pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.384776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngnvq\" (UniqueName: \"kubernetes.io/projected/690e1277-d006-4116-a019-5a0c9d2aef19-kube-api-access-ngnvq\") pod \"community-operators-pp4fw\" (UID: \"690e1277-d006-4116-a019-5a0c9d2aef19\") " pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.385165 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690e1277-d006-4116-a019-5a0c9d2aef19-utilities\") pod \"community-operators-pp4fw\" (UID: \"690e1277-d006-4116-a019-5a0c9d2aef19\") " pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.385283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690e1277-d006-4116-a019-5a0c9d2aef19-catalog-content\") pod \"community-operators-pp4fw\" (UID: \"690e1277-d006-4116-a019-5a0c9d2aef19\") " pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.395804 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qc2w7"] Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.415251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngnvq\" (UniqueName: \"kubernetes.io/projected/690e1277-d006-4116-a019-5a0c9d2aef19-kube-api-access-ngnvq\") pod \"community-operators-pp4fw\" (UID: \"690e1277-d006-4116-a019-5a0c9d2aef19\") " pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.486123 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9-utilities\") pod \"redhat-operators-qc2w7\" (UID: \"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9\") " pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.486199 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmlh\" (UniqueName: \"kubernetes.io/projected/f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9-kube-api-access-xlmlh\") pod \"redhat-operators-qc2w7\" (UID: \"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9\") " 
pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.486323 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9-catalog-content\") pod \"redhat-operators-qc2w7\" (UID: \"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9\") " pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.501336 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.587197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9-catalog-content\") pod \"redhat-operators-qc2w7\" (UID: \"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9\") " pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.587271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9-utilities\") pod \"redhat-operators-qc2w7\" (UID: \"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9\") " pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.587298 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlmlh\" (UniqueName: \"kubernetes.io/projected/f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9-kube-api-access-xlmlh\") pod \"redhat-operators-qc2w7\" (UID: \"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9\") " pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.587858 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9-catalog-content\") pod \"redhat-operators-qc2w7\" (UID: \"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9\") " pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.587939 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9-utilities\") pod \"redhat-operators-qc2w7\" (UID: \"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9\") " pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.605087 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.605297 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.611612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlmlh\" (UniqueName: \"kubernetes.io/projected/f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9-kube-api-access-xlmlh\") pod \"redhat-operators-qc2w7\" (UID: \"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9\") " pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.700275 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:39 crc kubenswrapper[4869]: I0314 09:04:39.977470 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pp4fw"] Mar 14 09:04:39 crc kubenswrapper[4869]: W0314 09:04:39.988540 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod690e1277_d006_4116_a019_5a0c9d2aef19.slice/crio-63543cea5bfe3ad7565c20b5cd44ffdbee8b429707024c10ce50fdd2c9448e68 WatchSource:0}: Error finding container 63543cea5bfe3ad7565c20b5cd44ffdbee8b429707024c10ce50fdd2c9448e68: Status 404 returned error can't find the container with id 63543cea5bfe3ad7565c20b5cd44ffdbee8b429707024c10ce50fdd2c9448e68 Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.150075 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qc2w7"] Mar 14 09:04:40 crc kubenswrapper[4869]: W0314 09:04:40.152820 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2e2177a_92e8_4d4d_bd3c_429dbfcc2db9.slice/crio-95d387a5dc0a40f4d5f90686655e8d9eecd029db8a47c9c1d42235c0f6c09175 WatchSource:0}: Error finding container 95d387a5dc0a40f4d5f90686655e8d9eecd029db8a47c9c1d42235c0f6c09175: Status 404 returned error can't find the container with id 95d387a5dc0a40f4d5f90686655e8d9eecd029db8a47c9c1d42235c0f6c09175 Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.608368 4869 generic.go:334] "Generic (PLEG): container finished" podID="5b63d540-c356-43fe-bf6a-c1f8aad19156" containerID="93491148f0248ef988446ebd1b39cd562d2e78208a327c41734dcc0eb6385a0c" exitCode=0 Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.608447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnql" 
event={"ID":"5b63d540-c356-43fe-bf6a-c1f8aad19156","Type":"ContainerDied","Data":"93491148f0248ef988446ebd1b39cd562d2e78208a327c41734dcc0eb6385a0c"} Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.611534 4869 generic.go:334] "Generic (PLEG): container finished" podID="f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9" containerID="058dda480b77043b44ddcbc1a9fe3c40a1efda3bae09aef14d6e56921e37c736" exitCode=0 Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.611606 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qc2w7" event={"ID":"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9","Type":"ContainerDied","Data":"058dda480b77043b44ddcbc1a9fe3c40a1efda3bae09aef14d6e56921e37c736"} Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.611635 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qc2w7" event={"ID":"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9","Type":"ContainerStarted","Data":"95d387a5dc0a40f4d5f90686655e8d9eecd029db8a47c9c1d42235c0f6c09175"} Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.614419 4869 generic.go:334] "Generic (PLEG): container finished" podID="690e1277-d006-4116-a019-5a0c9d2aef19" containerID="e56e7e7be6a0d06d9ec1fb0dab791c960a8b82b9154db65317f314a79e19fcf8" exitCode=0 Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.614577 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fw" event={"ID":"690e1277-d006-4116-a019-5a0c9d2aef19","Type":"ContainerDied","Data":"e56e7e7be6a0d06d9ec1fb0dab791c960a8b82b9154db65317f314a79e19fcf8"} Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.614658 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fw" event={"ID":"690e1277-d006-4116-a019-5a0c9d2aef19","Type":"ContainerStarted","Data":"63543cea5bfe3ad7565c20b5cd44ffdbee8b429707024c10ce50fdd2c9448e68"} Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 
09:04:40.622198 4869 generic.go:334] "Generic (PLEG): container finished" podID="1fb58f92-8606-4713-b0ea-ff91ddcca450" containerID="444c78a0038a5a20fba8a40349e5dbaa7eb59061a745d7d712baf4706b21ef54" exitCode=0 Mar 14 09:04:40 crc kubenswrapper[4869]: I0314 09:04:40.622251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9kq" event={"ID":"1fb58f92-8606-4713-b0ea-ff91ddcca450","Type":"ContainerDied","Data":"444c78a0038a5a20fba8a40349e5dbaa7eb59061a745d7d712baf4706b21ef54"} Mar 14 09:04:41 crc kubenswrapper[4869]: I0314 09:04:41.631043 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnql" event={"ID":"5b63d540-c356-43fe-bf6a-c1f8aad19156","Type":"ContainerStarted","Data":"45f8eeaac6397c16ccc3c9a4fd403d7edfd27b81127e957682e7646e1dce31ca"} Mar 14 09:04:41 crc kubenswrapper[4869]: I0314 09:04:41.634458 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qc2w7" event={"ID":"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9","Type":"ContainerStarted","Data":"04f993e31a9ce0f5befcbdadab1947d6f0107fa7d4f6239643920ea5a7f89e01"} Mar 14 09:04:41 crc kubenswrapper[4869]: I0314 09:04:41.636354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fw" event={"ID":"690e1277-d006-4116-a019-5a0c9d2aef19","Type":"ContainerStarted","Data":"66a0f09237caf0a8721597ca5a371202863d574cf22c006e03665fa5c7367362"} Mar 14 09:04:41 crc kubenswrapper[4869]: I0314 09:04:41.639323 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4d9kq" event={"ID":"1fb58f92-8606-4713-b0ea-ff91ddcca450","Type":"ContainerStarted","Data":"aa6ce171951fff900659fae1d4e11b412a6943e5738fda89c8e244665219bebc"} Mar 14 09:04:41 crc kubenswrapper[4869]: I0314 09:04:41.656663 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-nnnql" podStartSLOduration=3.197426048 podStartE2EDuration="5.656642811s" podCreationTimestamp="2026-03-14 09:04:36 +0000 UTC" firstStartedPulling="2026-03-14 09:04:38.591710292 +0000 UTC m=+431.563992345" lastFinishedPulling="2026-03-14 09:04:41.050927055 +0000 UTC m=+434.023209108" observedRunningTime="2026-03-14 09:04:41.651483161 +0000 UTC m=+434.623765214" watchObservedRunningTime="2026-03-14 09:04:41.656642811 +0000 UTC m=+434.628924854" Mar 14 09:04:41 crc kubenswrapper[4869]: I0314 09:04:41.715272 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4d9kq" podStartSLOduration=3.168224343 podStartE2EDuration="5.715249044s" podCreationTimestamp="2026-03-14 09:04:36 +0000 UTC" firstStartedPulling="2026-03-14 09:04:38.593189389 +0000 UTC m=+431.565471442" lastFinishedPulling="2026-03-14 09:04:41.14021409 +0000 UTC m=+434.112496143" observedRunningTime="2026-03-14 09:04:41.709951541 +0000 UTC m=+434.682233584" watchObservedRunningTime="2026-03-14 09:04:41.715249044 +0000 UTC m=+434.687531097" Mar 14 09:04:42 crc kubenswrapper[4869]: I0314 09:04:42.648178 4869 generic.go:334] "Generic (PLEG): container finished" podID="f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9" containerID="04f993e31a9ce0f5befcbdadab1947d6f0107fa7d4f6239643920ea5a7f89e01" exitCode=0 Mar 14 09:04:42 crc kubenswrapper[4869]: I0314 09:04:42.648269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qc2w7" event={"ID":"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9","Type":"ContainerDied","Data":"04f993e31a9ce0f5befcbdadab1947d6f0107fa7d4f6239643920ea5a7f89e01"} Mar 14 09:04:42 crc kubenswrapper[4869]: I0314 09:04:42.651577 4869 generic.go:334] "Generic (PLEG): container finished" podID="690e1277-d006-4116-a019-5a0c9d2aef19" containerID="66a0f09237caf0a8721597ca5a371202863d574cf22c006e03665fa5c7367362" exitCode=0 Mar 14 09:04:42 crc kubenswrapper[4869]: 
I0314 09:04:42.651647 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fw" event={"ID":"690e1277-d006-4116-a019-5a0c9d2aef19","Type":"ContainerDied","Data":"66a0f09237caf0a8721597ca5a371202863d574cf22c006e03665fa5c7367362"} Mar 14 09:04:43 crc kubenswrapper[4869]: I0314 09:04:43.661127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qc2w7" event={"ID":"f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9","Type":"ContainerStarted","Data":"3789a68c8aef85facfe1c6a3a8adf508ca386babb00286b402af93be13137dfb"} Mar 14 09:04:43 crc kubenswrapper[4869]: I0314 09:04:43.664591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fw" event={"ID":"690e1277-d006-4116-a019-5a0c9d2aef19","Type":"ContainerStarted","Data":"7b81a06d5a16a520d9aa978639f64119fac6a9d21aea4ccd058c8a4f0a321785"} Mar 14 09:04:43 crc kubenswrapper[4869]: I0314 09:04:43.682978 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qc2w7" podStartSLOduration=2.189756895 podStartE2EDuration="4.682953093s" podCreationTimestamp="2026-03-14 09:04:39 +0000 UTC" firstStartedPulling="2026-03-14 09:04:40.613539562 +0000 UTC m=+433.585821625" lastFinishedPulling="2026-03-14 09:04:43.10673577 +0000 UTC m=+436.079017823" observedRunningTime="2026-03-14 09:04:43.678880581 +0000 UTC m=+436.651162674" watchObservedRunningTime="2026-03-14 09:04:43.682953093 +0000 UTC m=+436.655235156" Mar 14 09:04:43 crc kubenswrapper[4869]: I0314 09:04:43.702826 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pp4fw" podStartSLOduration=2.242527621 podStartE2EDuration="4.702801642s" podCreationTimestamp="2026-03-14 09:04:39 +0000 UTC" firstStartedPulling="2026-03-14 09:04:40.624252871 +0000 UTC m=+433.596534924" lastFinishedPulling="2026-03-14 09:04:43.084526872 +0000 
UTC m=+436.056808945" observedRunningTime="2026-03-14 09:04:43.698857013 +0000 UTC m=+436.671139106" watchObservedRunningTime="2026-03-14 09:04:43.702801642 +0000 UTC m=+436.675083735" Mar 14 09:04:47 crc kubenswrapper[4869]: I0314 09:04:47.107876 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:47 crc kubenswrapper[4869]: I0314 09:04:47.109691 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:47 crc kubenswrapper[4869]: I0314 09:04:47.191081 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:47 crc kubenswrapper[4869]: I0314 09:04:47.294396 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:47 crc kubenswrapper[4869]: I0314 09:04:47.294494 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:47 crc kubenswrapper[4869]: I0314 09:04:47.338369 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:47 crc kubenswrapper[4869]: I0314 09:04:47.755978 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4d9kq" Mar 14 09:04:47 crc kubenswrapper[4869]: I0314 09:04:47.762055 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nnnql" Mar 14 09:04:49 crc kubenswrapper[4869]: I0314 09:04:49.502272 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:49 crc kubenswrapper[4869]: I0314 09:04:49.502352 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:49 crc kubenswrapper[4869]: I0314 09:04:49.548812 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:49 crc kubenswrapper[4869]: I0314 09:04:49.701698 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:49 crc kubenswrapper[4869]: I0314 09:04:49.701770 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:49 crc kubenswrapper[4869]: I0314 09:04:49.757870 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pp4fw" Mar 14 09:04:50 crc kubenswrapper[4869]: I0314 09:04:50.739318 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qc2w7" podUID="f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9" containerName="registry-server" probeResult="failure" output=< Mar 14 09:04:50 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 09:04:50 crc kubenswrapper[4869]: > Mar 14 09:04:59 crc kubenswrapper[4869]: I0314 09:04:59.747550 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:04:59 crc kubenswrapper[4869]: I0314 09:04:59.796137 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qc2w7" Mar 14 09:05:09 crc kubenswrapper[4869]: I0314 09:05:09.606041 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:05:09 
crc kubenswrapper[4869]: I0314 09:05:09.606726 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:05:39 crc kubenswrapper[4869]: I0314 09:05:39.606012 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:05:39 crc kubenswrapper[4869]: I0314 09:05:39.607039 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:05:39 crc kubenswrapper[4869]: I0314 09:05:39.607119 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:05:39 crc kubenswrapper[4869]: I0314 09:05:39.608094 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"507a06780201c47c66d5c51feef654718e70befa4486d8f6554644934872ffc0"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:05:39 crc kubenswrapper[4869]: I0314 09:05:39.608248 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" 
podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://507a06780201c47c66d5c51feef654718e70befa4486d8f6554644934872ffc0" gracePeriod=600 Mar 14 09:05:40 crc kubenswrapper[4869]: I0314 09:05:40.052263 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="507a06780201c47c66d5c51feef654718e70befa4486d8f6554644934872ffc0" exitCode=0 Mar 14 09:05:40 crc kubenswrapper[4869]: I0314 09:05:40.052356 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"507a06780201c47c66d5c51feef654718e70befa4486d8f6554644934872ffc0"} Mar 14 09:05:40 crc kubenswrapper[4869]: I0314 09:05:40.052860 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"2fdd9eae6cd4bf6b30da6b9cd0dbf05ee2bb3cb545f0871a79f7910b0bf3b063"} Mar 14 09:05:40 crc kubenswrapper[4869]: I0314 09:05:40.052902 4869 scope.go:117] "RemoveContainer" containerID="dc8ca6407dbc54e2f1d4665dcd0cf9493671ce69ac77cbec078d197d598c079b" Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.167729 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557986-s59q5"] Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.170390 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557986-s59q5"] Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.170612 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557986-s59q5" Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.173619 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.174741 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.174799 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.229605 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdbf\" (UniqueName: \"kubernetes.io/projected/c2171b9a-7258-4bad-97b8-37d0f4a599b2-kube-api-access-zgdbf\") pod \"auto-csr-approver-29557986-s59q5\" (UID: \"c2171b9a-7258-4bad-97b8-37d0f4a599b2\") " pod="openshift-infra/auto-csr-approver-29557986-s59q5" Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.332309 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgdbf\" (UniqueName: \"kubernetes.io/projected/c2171b9a-7258-4bad-97b8-37d0f4a599b2-kube-api-access-zgdbf\") pod \"auto-csr-approver-29557986-s59q5\" (UID: \"c2171b9a-7258-4bad-97b8-37d0f4a599b2\") " pod="openshift-infra/auto-csr-approver-29557986-s59q5" Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.359726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgdbf\" (UniqueName: \"kubernetes.io/projected/c2171b9a-7258-4bad-97b8-37d0f4a599b2-kube-api-access-zgdbf\") pod \"auto-csr-approver-29557986-s59q5\" (UID: \"c2171b9a-7258-4bad-97b8-37d0f4a599b2\") " pod="openshift-infra/auto-csr-approver-29557986-s59q5" Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.492162 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557986-s59q5" Mar 14 09:06:00 crc kubenswrapper[4869]: I0314 09:06:00.928553 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557986-s59q5"] Mar 14 09:06:01 crc kubenswrapper[4869]: I0314 09:06:01.208832 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557986-s59q5" event={"ID":"c2171b9a-7258-4bad-97b8-37d0f4a599b2","Type":"ContainerStarted","Data":"b7d30d83bb8c22ad194882cf57e3cdbc11ff1866fa0f74dc72acf60e880a5fa8"} Mar 14 09:06:02 crc kubenswrapper[4869]: I0314 09:06:02.218151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557986-s59q5" event={"ID":"c2171b9a-7258-4bad-97b8-37d0f4a599b2","Type":"ContainerStarted","Data":"6a802fd31527e618e7d4de122a40304ff7b29d7bd5c99411bf5fb9b60dbb4601"} Mar 14 09:06:02 crc kubenswrapper[4869]: I0314 09:06:02.247493 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557986-s59q5" podStartSLOduration=1.477106675 podStartE2EDuration="2.24745842s" podCreationTimestamp="2026-03-14 09:06:00 +0000 UTC" firstStartedPulling="2026-03-14 09:06:00.937889978 +0000 UTC m=+513.910172041" lastFinishedPulling="2026-03-14 09:06:01.708241693 +0000 UTC m=+514.680523786" observedRunningTime="2026-03-14 09:06:02.239630083 +0000 UTC m=+515.211912176" watchObservedRunningTime="2026-03-14 09:06:02.24745842 +0000 UTC m=+515.219740513" Mar 14 09:06:03 crc kubenswrapper[4869]: I0314 09:06:03.228181 4869 generic.go:334] "Generic (PLEG): container finished" podID="c2171b9a-7258-4bad-97b8-37d0f4a599b2" containerID="6a802fd31527e618e7d4de122a40304ff7b29d7bd5c99411bf5fb9b60dbb4601" exitCode=0 Mar 14 09:06:03 crc kubenswrapper[4869]: I0314 09:06:03.228261 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557986-s59q5" 
event={"ID":"c2171b9a-7258-4bad-97b8-37d0f4a599b2","Type":"ContainerDied","Data":"6a802fd31527e618e7d4de122a40304ff7b29d7bd5c99411bf5fb9b60dbb4601"} Mar 14 09:06:04 crc kubenswrapper[4869]: I0314 09:06:04.581658 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557986-s59q5" Mar 14 09:06:04 crc kubenswrapper[4869]: I0314 09:06:04.697857 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdbf\" (UniqueName: \"kubernetes.io/projected/c2171b9a-7258-4bad-97b8-37d0f4a599b2-kube-api-access-zgdbf\") pod \"c2171b9a-7258-4bad-97b8-37d0f4a599b2\" (UID: \"c2171b9a-7258-4bad-97b8-37d0f4a599b2\") " Mar 14 09:06:04 crc kubenswrapper[4869]: I0314 09:06:04.710918 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2171b9a-7258-4bad-97b8-37d0f4a599b2-kube-api-access-zgdbf" (OuterVolumeSpecName: "kube-api-access-zgdbf") pod "c2171b9a-7258-4bad-97b8-37d0f4a599b2" (UID: "c2171b9a-7258-4bad-97b8-37d0f4a599b2"). InnerVolumeSpecName "kube-api-access-zgdbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:06:04 crc kubenswrapper[4869]: I0314 09:06:04.799931 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdbf\" (UniqueName: \"kubernetes.io/projected/c2171b9a-7258-4bad-97b8-37d0f4a599b2-kube-api-access-zgdbf\") on node \"crc\" DevicePath \"\"" Mar 14 09:06:05 crc kubenswrapper[4869]: I0314 09:06:05.252241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557986-s59q5" event={"ID":"c2171b9a-7258-4bad-97b8-37d0f4a599b2","Type":"ContainerDied","Data":"b7d30d83bb8c22ad194882cf57e3cdbc11ff1866fa0f74dc72acf60e880a5fa8"} Mar 14 09:06:05 crc kubenswrapper[4869]: I0314 09:06:05.252841 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d30d83bb8c22ad194882cf57e3cdbc11ff1866fa0f74dc72acf60e880a5fa8" Mar 14 09:06:05 crc kubenswrapper[4869]: I0314 09:06:05.252344 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557986-s59q5" Mar 14 09:06:05 crc kubenswrapper[4869]: I0314 09:06:05.658559 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557980-9t5kk"] Mar 14 09:06:05 crc kubenswrapper[4869]: I0314 09:06:05.668740 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557980-9t5kk"] Mar 14 09:06:05 crc kubenswrapper[4869]: I0314 09:06:05.713775 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3ce98b-d0f8-4fda-84cb-390a11eb508e" path="/var/lib/kubelet/pods/db3ce98b-d0f8-4fda-84cb-390a11eb508e/volumes" Mar 14 09:07:39 crc kubenswrapper[4869]: I0314 09:07:39.605887 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 14 09:07:39 crc kubenswrapper[4869]: I0314 09:07:39.607662 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.161640 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557988-9bhtd"] Mar 14 09:08:00 crc kubenswrapper[4869]: E0314 09:08:00.162798 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2171b9a-7258-4bad-97b8-37d0f4a599b2" containerName="oc" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.162825 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2171b9a-7258-4bad-97b8-37d0f4a599b2" containerName="oc" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.163059 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2171b9a-7258-4bad-97b8-37d0f4a599b2" containerName="oc" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.164976 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557988-9bhtd" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.168243 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.168447 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.168816 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.170618 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557988-9bhtd"] Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.309065 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tqt\" (UniqueName: \"kubernetes.io/projected/c274eb1e-ca49-4363-9eab-6508b6268654-kube-api-access-h5tqt\") pod \"auto-csr-approver-29557988-9bhtd\" (UID: \"c274eb1e-ca49-4363-9eab-6508b6268654\") " pod="openshift-infra/auto-csr-approver-29557988-9bhtd" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.410572 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5tqt\" (UniqueName: \"kubernetes.io/projected/c274eb1e-ca49-4363-9eab-6508b6268654-kube-api-access-h5tqt\") pod \"auto-csr-approver-29557988-9bhtd\" (UID: \"c274eb1e-ca49-4363-9eab-6508b6268654\") " pod="openshift-infra/auto-csr-approver-29557988-9bhtd" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.446365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5tqt\" (UniqueName: \"kubernetes.io/projected/c274eb1e-ca49-4363-9eab-6508b6268654-kube-api-access-h5tqt\") pod \"auto-csr-approver-29557988-9bhtd\" (UID: \"c274eb1e-ca49-4363-9eab-6508b6268654\") " 
pod="openshift-infra/auto-csr-approver-29557988-9bhtd" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.493486 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557988-9bhtd" Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.756774 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557988-9bhtd"] Mar 14 09:08:00 crc kubenswrapper[4869]: W0314 09:08:00.766278 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc274eb1e_ca49_4363_9eab_6508b6268654.slice/crio-bee7b32b50a609842e43b0081eec312a6445fcf5ac0efddf146ef3b03931c61d WatchSource:0}: Error finding container bee7b32b50a609842e43b0081eec312a6445fcf5ac0efddf146ef3b03931c61d: Status 404 returned error can't find the container with id bee7b32b50a609842e43b0081eec312a6445fcf5ac0efddf146ef3b03931c61d Mar 14 09:08:00 crc kubenswrapper[4869]: I0314 09:08:00.768717 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:08:01 crc kubenswrapper[4869]: I0314 09:08:01.093077 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557988-9bhtd" event={"ID":"c274eb1e-ca49-4363-9eab-6508b6268654","Type":"ContainerStarted","Data":"bee7b32b50a609842e43b0081eec312a6445fcf5ac0efddf146ef3b03931c61d"} Mar 14 09:08:02 crc kubenswrapper[4869]: I0314 09:08:02.100866 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557988-9bhtd" event={"ID":"c274eb1e-ca49-4363-9eab-6508b6268654","Type":"ContainerStarted","Data":"f4efcd7105b78b04fc7894ba7c222559706a70608d2cd0700012771fc3fe1b6f"} Mar 14 09:08:03 crc kubenswrapper[4869]: I0314 09:08:03.109906 4869 generic.go:334] "Generic (PLEG): container finished" podID="c274eb1e-ca49-4363-9eab-6508b6268654" 
containerID="f4efcd7105b78b04fc7894ba7c222559706a70608d2cd0700012771fc3fe1b6f" exitCode=0 Mar 14 09:08:03 crc kubenswrapper[4869]: I0314 09:08:03.109971 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557988-9bhtd" event={"ID":"c274eb1e-ca49-4363-9eab-6508b6268654","Type":"ContainerDied","Data":"f4efcd7105b78b04fc7894ba7c222559706a70608d2cd0700012771fc3fe1b6f"} Mar 14 09:08:04 crc kubenswrapper[4869]: I0314 09:08:04.399820 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557988-9bhtd" Mar 14 09:08:04 crc kubenswrapper[4869]: I0314 09:08:04.571566 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5tqt\" (UniqueName: \"kubernetes.io/projected/c274eb1e-ca49-4363-9eab-6508b6268654-kube-api-access-h5tqt\") pod \"c274eb1e-ca49-4363-9eab-6508b6268654\" (UID: \"c274eb1e-ca49-4363-9eab-6508b6268654\") " Mar 14 09:08:04 crc kubenswrapper[4869]: I0314 09:08:04.578546 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c274eb1e-ca49-4363-9eab-6508b6268654-kube-api-access-h5tqt" (OuterVolumeSpecName: "kube-api-access-h5tqt") pod "c274eb1e-ca49-4363-9eab-6508b6268654" (UID: "c274eb1e-ca49-4363-9eab-6508b6268654"). InnerVolumeSpecName "kube-api-access-h5tqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:08:04 crc kubenswrapper[4869]: I0314 09:08:04.673765 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5tqt\" (UniqueName: \"kubernetes.io/projected/c274eb1e-ca49-4363-9eab-6508b6268654-kube-api-access-h5tqt\") on node \"crc\" DevicePath \"\"" Mar 14 09:08:05 crc kubenswrapper[4869]: I0314 09:08:05.128836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557988-9bhtd" event={"ID":"c274eb1e-ca49-4363-9eab-6508b6268654","Type":"ContainerDied","Data":"bee7b32b50a609842e43b0081eec312a6445fcf5ac0efddf146ef3b03931c61d"} Mar 14 09:08:05 crc kubenswrapper[4869]: I0314 09:08:05.128904 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bee7b32b50a609842e43b0081eec312a6445fcf5ac0efddf146ef3b03931c61d" Mar 14 09:08:05 crc kubenswrapper[4869]: I0314 09:08:05.128988 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557988-9bhtd" Mar 14 09:08:05 crc kubenswrapper[4869]: I0314 09:08:05.186230 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557982-m47g2"] Mar 14 09:08:05 crc kubenswrapper[4869]: I0314 09:08:05.189649 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557982-m47g2"] Mar 14 09:08:05 crc kubenswrapper[4869]: I0314 09:08:05.715173 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28fc8bb0-4a61-40cf-809f-408035a85c2e" path="/var/lib/kubelet/pods/28fc8bb0-4a61-40cf-809f-408035a85c2e/volumes" Mar 14 09:08:09 crc kubenswrapper[4869]: I0314 09:08:09.606462 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 14 09:08:09 crc kubenswrapper[4869]: I0314 09:08:09.607014 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.608018 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wvrh7"] Mar 14 09:08:34 crc kubenswrapper[4869]: E0314 09:08:34.609125 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c274eb1e-ca49-4363-9eab-6508b6268654" containerName="oc" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.609164 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c274eb1e-ca49-4363-9eab-6508b6268654" containerName="oc" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.609340 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c274eb1e-ca49-4363-9eab-6508b6268654" containerName="oc" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.610014 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.624419 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wvrh7"] Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.804429 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/509192dc-2076-42ba-9c82-3405d2b21cfb-trusted-ca\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.804489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/509192dc-2076-42ba-9c82-3405d2b21cfb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.804548 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjb46\" (UniqueName: \"kubernetes.io/projected/509192dc-2076-42ba-9c82-3405d2b21cfb-kube-api-access-rjb46\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.804582 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/509192dc-2076-42ba-9c82-3405d2b21cfb-registry-certificates\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.804657 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/509192dc-2076-42ba-9c82-3405d2b21cfb-bound-sa-token\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.804698 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/509192dc-2076-42ba-9c82-3405d2b21cfb-registry-tls\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.804728 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/509192dc-2076-42ba-9c82-3405d2b21cfb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.804770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.836780 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.905931 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/509192dc-2076-42ba-9c82-3405d2b21cfb-registry-tls\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.906025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/509192dc-2076-42ba-9c82-3405d2b21cfb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.906077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/509192dc-2076-42ba-9c82-3405d2b21cfb-trusted-ca\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.906114 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/509192dc-2076-42ba-9c82-3405d2b21cfb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.906142 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rjb46\" (UniqueName: \"kubernetes.io/projected/509192dc-2076-42ba-9c82-3405d2b21cfb-kube-api-access-rjb46\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.906170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/509192dc-2076-42ba-9c82-3405d2b21cfb-registry-certificates\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.906218 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/509192dc-2076-42ba-9c82-3405d2b21cfb-bound-sa-token\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.907421 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/509192dc-2076-42ba-9c82-3405d2b21cfb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.908218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/509192dc-2076-42ba-9c82-3405d2b21cfb-trusted-ca\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc 
kubenswrapper[4869]: I0314 09:08:34.909024 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/509192dc-2076-42ba-9c82-3405d2b21cfb-registry-certificates\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.914495 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/509192dc-2076-42ba-9c82-3405d2b21cfb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.914562 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/509192dc-2076-42ba-9c82-3405d2b21cfb-registry-tls\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.928007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjb46\" (UniqueName: \"kubernetes.io/projected/509192dc-2076-42ba-9c82-3405d2b21cfb-kube-api-access-rjb46\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:34 crc kubenswrapper[4869]: I0314 09:08:34.933394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/509192dc-2076-42ba-9c82-3405d2b21cfb-bound-sa-token\") pod \"image-registry-66df7c8f76-wvrh7\" (UID: \"509192dc-2076-42ba-9c82-3405d2b21cfb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:35 crc kubenswrapper[4869]: I0314 09:08:35.233124 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:35 crc kubenswrapper[4869]: I0314 09:08:35.501622 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wvrh7"] Mar 14 09:08:35 crc kubenswrapper[4869]: W0314 09:08:35.515694 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod509192dc_2076_42ba_9c82_3405d2b21cfb.slice/crio-9d078efeb5ba7e76589cd2fc37ae3cbe1f90ae00686afda200a4cd06673394e9 WatchSource:0}: Error finding container 9d078efeb5ba7e76589cd2fc37ae3cbe1f90ae00686afda200a4cd06673394e9: Status 404 returned error can't find the container with id 9d078efeb5ba7e76589cd2fc37ae3cbe1f90ae00686afda200a4cd06673394e9 Mar 14 09:08:36 crc kubenswrapper[4869]: I0314 09:08:36.359123 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" event={"ID":"509192dc-2076-42ba-9c82-3405d2b21cfb","Type":"ContainerStarted","Data":"379b1d1d2d06b8c6fc3cea73a4a46f9d4c49bab3f0858ddec0d17fab37a09041"} Mar 14 09:08:36 crc kubenswrapper[4869]: I0314 09:08:36.360725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" event={"ID":"509192dc-2076-42ba-9c82-3405d2b21cfb","Type":"ContainerStarted","Data":"9d078efeb5ba7e76589cd2fc37ae3cbe1f90ae00686afda200a4cd06673394e9"} Mar 14 09:08:36 crc kubenswrapper[4869]: I0314 09:08:36.360784 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:36 crc kubenswrapper[4869]: I0314 09:08:36.390908 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" podStartSLOduration=2.390873141 podStartE2EDuration="2.390873141s" podCreationTimestamp="2026-03-14 09:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:08:36.386095613 +0000 UTC m=+669.358377706" watchObservedRunningTime="2026-03-14 09:08:36.390873141 +0000 UTC m=+669.363155264" Mar 14 09:08:39 crc kubenswrapper[4869]: I0314 09:08:39.605429 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:08:39 crc kubenswrapper[4869]: I0314 09:08:39.605589 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:08:39 crc kubenswrapper[4869]: I0314 09:08:39.605658 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:08:39 crc kubenswrapper[4869]: I0314 09:08:39.606624 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2fdd9eae6cd4bf6b30da6b9cd0dbf05ee2bb3cb545f0871a79f7910b0bf3b063"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:08:39 crc kubenswrapper[4869]: I0314 09:08:39.606731 4869 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://2fdd9eae6cd4bf6b30da6b9cd0dbf05ee2bb3cb545f0871a79f7910b0bf3b063" gracePeriod=600 Mar 14 09:08:40 crc kubenswrapper[4869]: I0314 09:08:40.390557 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="2fdd9eae6cd4bf6b30da6b9cd0dbf05ee2bb3cb545f0871a79f7910b0bf3b063" exitCode=0 Mar 14 09:08:40 crc kubenswrapper[4869]: I0314 09:08:40.390653 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"2fdd9eae6cd4bf6b30da6b9cd0dbf05ee2bb3cb545f0871a79f7910b0bf3b063"} Mar 14 09:08:40 crc kubenswrapper[4869]: I0314 09:08:40.390957 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"f999968c5938eecacf43f0f074516b2654961d6ed5e7331aa8e5f6081cb0111c"} Mar 14 09:08:40 crc kubenswrapper[4869]: I0314 09:08:40.390987 4869 scope.go:117] "RemoveContainer" containerID="507a06780201c47c66d5c51feef654718e70befa4486d8f6554644934872ffc0" Mar 14 09:08:51 crc kubenswrapper[4869]: I0314 09:08:51.538179 4869 scope.go:117] "RemoveContainer" containerID="f73a201c5709d0cb8fb9c9655cc45b3650362550d18c0d9a5e182e9b4a4863ba" Mar 14 09:08:51 crc kubenswrapper[4869]: I0314 09:08:51.591474 4869 scope.go:117] "RemoveContainer" containerID="9c7696676a23e9ff081bb8cac2b959e068640f92185d39b2ce6d8912c35ed709" Mar 14 09:08:55 crc kubenswrapper[4869]: I0314 09:08:55.241799 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-wvrh7" Mar 14 09:08:55 crc kubenswrapper[4869]: I0314 09:08:55.359800 4869 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fdrdm"] Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.420058 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" podUID="91339654-6d93-49bd-b48a-d2cf1dde09aa" containerName="registry" containerID="cri-o://14680713befe04a8754a3a341e5ac9e93507206312af9b4973f5bf11e4fba9e5" gracePeriod=30 Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.681033 4869 generic.go:334] "Generic (PLEG): container finished" podID="91339654-6d93-49bd-b48a-d2cf1dde09aa" containerID="14680713befe04a8754a3a341e5ac9e93507206312af9b4973f5bf11e4fba9e5" exitCode=0 Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.681111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" event={"ID":"91339654-6d93-49bd-b48a-d2cf1dde09aa","Type":"ContainerDied","Data":"14680713befe04a8754a3a341e5ac9e93507206312af9b4973f5bf11e4fba9e5"} Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.891046 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.904165 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91339654-6d93-49bd-b48a-d2cf1dde09aa-ca-trust-extracted\") pod \"91339654-6d93-49bd-b48a-d2cf1dde09aa\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.904318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-bound-sa-token\") pod \"91339654-6d93-49bd-b48a-d2cf1dde09aa\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.904381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-certificates\") pod \"91339654-6d93-49bd-b48a-d2cf1dde09aa\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.905853 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "91339654-6d93-49bd-b48a-d2cf1dde09aa" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.905947 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91339654-6d93-49bd-b48a-d2cf1dde09aa-installation-pull-secrets\") pod \"91339654-6d93-49bd-b48a-d2cf1dde09aa\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.906121 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"91339654-6d93-49bd-b48a-d2cf1dde09aa\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.906183 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-trusted-ca\") pod \"91339654-6d93-49bd-b48a-d2cf1dde09aa\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.906215 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-tls\") pod \"91339654-6d93-49bd-b48a-d2cf1dde09aa\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.906267 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvskr\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-kube-api-access-wvskr\") pod \"91339654-6d93-49bd-b48a-d2cf1dde09aa\" (UID: \"91339654-6d93-49bd-b48a-d2cf1dde09aa\") " Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.906540 4869 reconciler_common.go:293] "Volume detached for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.907352 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "91339654-6d93-49bd-b48a-d2cf1dde09aa" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.916895 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "91339654-6d93-49bd-b48a-d2cf1dde09aa" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.930294 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "91339654-6d93-49bd-b48a-d2cf1dde09aa" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.930621 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-kube-api-access-wvskr" (OuterVolumeSpecName: "kube-api-access-wvskr") pod "91339654-6d93-49bd-b48a-d2cf1dde09aa" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa"). InnerVolumeSpecName "kube-api-access-wvskr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.930677 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91339654-6d93-49bd-b48a-d2cf1dde09aa-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "91339654-6d93-49bd-b48a-d2cf1dde09aa" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.934860 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "91339654-6d93-49bd-b48a-d2cf1dde09aa" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 14 09:09:20 crc kubenswrapper[4869]: I0314 09:09:20.944610 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91339654-6d93-49bd-b48a-d2cf1dde09aa-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "91339654-6d93-49bd-b48a-d2cf1dde09aa" (UID: "91339654-6d93-49bd-b48a-d2cf1dde09aa"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.007277 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvskr\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-kube-api-access-wvskr\") on node \"crc\" DevicePath \"\"" Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.007324 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91339654-6d93-49bd-b48a-d2cf1dde09aa-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.007337 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.007349 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91339654-6d93-49bd-b48a-d2cf1dde09aa-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.007363 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91339654-6d93-49bd-b48a-d2cf1dde09aa-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.007374 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91339654-6d93-49bd-b48a-d2cf1dde09aa-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.692764 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" 
event={"ID":"91339654-6d93-49bd-b48a-d2cf1dde09aa","Type":"ContainerDied","Data":"670b368dd37915342cbc9a12922bf7f61dfbc1752e7ba5647141d3fa7ddd3c69"} Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.692868 4869 scope.go:117] "RemoveContainer" containerID="14680713befe04a8754a3a341e5ac9e93507206312af9b4973f5bf11e4fba9e5" Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.692928 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fdrdm" Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.737750 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fdrdm"] Mar 14 09:09:21 crc kubenswrapper[4869]: I0314 09:09:21.745229 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fdrdm"] Mar 14 09:09:23 crc kubenswrapper[4869]: I0314 09:09:23.714385 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91339654-6d93-49bd-b48a-d2cf1dde09aa" path="/var/lib/kubelet/pods/91339654-6d93-49bd-b48a-d2cf1dde09aa/volumes" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.804538 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hjngc"] Mar 14 09:09:52 crc kubenswrapper[4869]: E0314 09:09:52.805299 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91339654-6d93-49bd-b48a-d2cf1dde09aa" containerName="registry" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.805312 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="91339654-6d93-49bd-b48a-d2cf1dde09aa" containerName="registry" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.805402 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="91339654-6d93-49bd-b48a-d2cf1dde09aa" containerName="registry" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.805817 4869 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hjngc" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.809129 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.809445 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.809538 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-p444n" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.814023 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mqzbs"] Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.814982 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mqzbs" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.817107 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-989j8" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.820445 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hjngc"] Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.839568 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4n6nc"] Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.840447 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.843430 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kqnnk" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.863153 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4n6nc"] Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.863928 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mqzbs"] Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.927657 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pqjk\" (UniqueName: \"kubernetes.io/projected/b296f20d-7a2e-4515-9881-d00fe5f3c5ba-kube-api-access-6pqjk\") pod \"cert-manager-webhook-687f57d79b-4n6nc\" (UID: \"b296f20d-7a2e-4515-9881-d00fe5f3c5ba\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.927740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28wj4\" (UniqueName: \"kubernetes.io/projected/fabb3acb-23e6-49d7-a021-3c72273147a6-kube-api-access-28wj4\") pod \"cert-manager-cainjector-cf98fcc89-hjngc\" (UID: \"fabb3acb-23e6-49d7-a021-3c72273147a6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hjngc" Mar 14 09:09:52 crc kubenswrapper[4869]: I0314 09:09:52.927828 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvl2f\" (UniqueName: \"kubernetes.io/projected/dc806080-ac5e-4802-9e6f-eca4be72ab49-kube-api-access-zvl2f\") pod \"cert-manager-858654f9db-mqzbs\" (UID: \"dc806080-ac5e-4802-9e6f-eca4be72ab49\") " pod="cert-manager/cert-manager-858654f9db-mqzbs" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.029072 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvl2f\" (UniqueName: \"kubernetes.io/projected/dc806080-ac5e-4802-9e6f-eca4be72ab49-kube-api-access-zvl2f\") pod \"cert-manager-858654f9db-mqzbs\" (UID: \"dc806080-ac5e-4802-9e6f-eca4be72ab49\") " pod="cert-manager/cert-manager-858654f9db-mqzbs" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.029255 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pqjk\" (UniqueName: \"kubernetes.io/projected/b296f20d-7a2e-4515-9881-d00fe5f3c5ba-kube-api-access-6pqjk\") pod \"cert-manager-webhook-687f57d79b-4n6nc\" (UID: \"b296f20d-7a2e-4515-9881-d00fe5f3c5ba\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.029346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28wj4\" (UniqueName: \"kubernetes.io/projected/fabb3acb-23e6-49d7-a021-3c72273147a6-kube-api-access-28wj4\") pod \"cert-manager-cainjector-cf98fcc89-hjngc\" (UID: \"fabb3acb-23e6-49d7-a021-3c72273147a6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-hjngc" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.049980 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvl2f\" (UniqueName: \"kubernetes.io/projected/dc806080-ac5e-4802-9e6f-eca4be72ab49-kube-api-access-zvl2f\") pod \"cert-manager-858654f9db-mqzbs\" (UID: \"dc806080-ac5e-4802-9e6f-eca4be72ab49\") " pod="cert-manager/cert-manager-858654f9db-mqzbs" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.050446 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28wj4\" (UniqueName: \"kubernetes.io/projected/fabb3acb-23e6-49d7-a021-3c72273147a6-kube-api-access-28wj4\") pod \"cert-manager-cainjector-cf98fcc89-hjngc\" (UID: \"fabb3acb-23e6-49d7-a021-3c72273147a6\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-hjngc" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.054336 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pqjk\" (UniqueName: \"kubernetes.io/projected/b296f20d-7a2e-4515-9881-d00fe5f3c5ba-kube-api-access-6pqjk\") pod \"cert-manager-webhook-687f57d79b-4n6nc\" (UID: \"b296f20d-7a2e-4515-9881-d00fe5f3c5ba\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.133571 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hjngc" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.139665 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mqzbs" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.170715 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.499950 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-hjngc"] Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.645654 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mqzbs"] Mar 14 09:09:53 crc kubenswrapper[4869]: W0314 09:09:53.651824 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc806080_ac5e_4802_9e6f_eca4be72ab49.slice/crio-227932c48a1c60e87bfe407c63b564816ed593693439fb82917f042d19709b53 WatchSource:0}: Error finding container 227932c48a1c60e87bfe407c63b564816ed593693439fb82917f042d19709b53: Status 404 returned error can't find the container with id 227932c48a1c60e87bfe407c63b564816ed593693439fb82917f042d19709b53 Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 
09:09:53.743211 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4n6nc"] Mar 14 09:09:53 crc kubenswrapper[4869]: W0314 09:09:53.746776 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb296f20d_7a2e_4515_9881_d00fe5f3c5ba.slice/crio-6458b34f4cf9e7b5f7e1f95223607579bdc897533d30dec14a1b2ab374652e43 WatchSource:0}: Error finding container 6458b34f4cf9e7b5f7e1f95223607579bdc897533d30dec14a1b2ab374652e43: Status 404 returned error can't find the container with id 6458b34f4cf9e7b5f7e1f95223607579bdc897533d30dec14a1b2ab374652e43 Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.931054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hjngc" event={"ID":"fabb3acb-23e6-49d7-a021-3c72273147a6","Type":"ContainerStarted","Data":"a14b1d2fe912885aee0cd672e985eb2cd76d0cab1ea19ce6ca067c93f4eb2e9a"} Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.931964 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mqzbs" event={"ID":"dc806080-ac5e-4802-9e6f-eca4be72ab49","Type":"ContainerStarted","Data":"227932c48a1c60e87bfe407c63b564816ed593693439fb82917f042d19709b53"} Mar 14 09:09:53 crc kubenswrapper[4869]: I0314 09:09:53.932861 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" event={"ID":"b296f20d-7a2e-4515-9881-d00fe5f3c5ba","Type":"ContainerStarted","Data":"6458b34f4cf9e7b5f7e1f95223607579bdc897533d30dec14a1b2ab374652e43"} Mar 14 09:09:56 crc kubenswrapper[4869]: I0314 09:09:56.955649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hjngc" event={"ID":"fabb3acb-23e6-49d7-a021-3c72273147a6","Type":"ContainerStarted","Data":"076bab678d1f7b6b87d1a3afb058be191fbf14b730a4647e96c45c83485fd255"} Mar 14 09:09:56 crc 
kubenswrapper[4869]: I0314 09:09:56.959942 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mqzbs" event={"ID":"dc806080-ac5e-4802-9e6f-eca4be72ab49","Type":"ContainerStarted","Data":"ef13540493b6e4819e88718305e2edd73af8a80357d97351b4f637ac21a7e473"} Mar 14 09:09:56 crc kubenswrapper[4869]: I0314 09:09:56.969541 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-hjngc" podStartSLOduration=1.7954542770000002 podStartE2EDuration="4.969494575s" podCreationTimestamp="2026-03-14 09:09:52 +0000 UTC" firstStartedPulling="2026-03-14 09:09:53.506395254 +0000 UTC m=+746.478677307" lastFinishedPulling="2026-03-14 09:09:56.680435552 +0000 UTC m=+749.652717605" observedRunningTime="2026-03-14 09:09:56.968386238 +0000 UTC m=+749.940668301" watchObservedRunningTime="2026-03-14 09:09:56.969494575 +0000 UTC m=+749.941776648" Mar 14 09:09:58 crc kubenswrapper[4869]: I0314 09:09:58.462800 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mqzbs" podStartSLOduration=3.436410633 podStartE2EDuration="6.46277062s" podCreationTimestamp="2026-03-14 09:09:52 +0000 UTC" firstStartedPulling="2026-03-14 09:09:53.653841549 +0000 UTC m=+746.626123602" lastFinishedPulling="2026-03-14 09:09:56.680201526 +0000 UTC m=+749.652483589" observedRunningTime="2026-03-14 09:09:56.990539797 +0000 UTC m=+749.962821870" watchObservedRunningTime="2026-03-14 09:09:58.46277062 +0000 UTC m=+751.435052693" Mar 14 09:09:59 crc kubenswrapper[4869]: I0314 09:09:59.982648 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" event={"ID":"b296f20d-7a2e-4515-9881-d00fe5f3c5ba","Type":"ContainerStarted","Data":"1fe2c8d815ae86171808159288aadf46cc4131d4b13c523cbcb5e55ea8636f5c"} Mar 14 09:09:59 crc kubenswrapper[4869]: I0314 09:09:59.983821 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.146200 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" podStartSLOduration=2.8899610620000002 podStartE2EDuration="8.146160048s" podCreationTimestamp="2026-03-14 09:09:52 +0000 UTC" firstStartedPulling="2026-03-14 09:09:53.748619787 +0000 UTC m=+746.720901850" lastFinishedPulling="2026-03-14 09:09:59.004818793 +0000 UTC m=+751.977100836" observedRunningTime="2026-03-14 09:10:00.01835553 +0000 UTC m=+752.990637613" watchObservedRunningTime="2026-03-14 09:10:00.146160048 +0000 UTC m=+753.118442141" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.151753 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557990-d4fln"] Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.156295 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557990-d4fln" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.161176 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.161494 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.161730 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.165834 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557990-d4fln"] Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.347439 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6kt\" (UniqueName: \"kubernetes.io/projected/27a61b7e-10da-4b46-9d85-4833360660fe-kube-api-access-ps6kt\") pod \"auto-csr-approver-29557990-d4fln\" (UID: \"27a61b7e-10da-4b46-9d85-4833360660fe\") " pod="openshift-infra/auto-csr-approver-29557990-d4fln" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.449035 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6kt\" (UniqueName: \"kubernetes.io/projected/27a61b7e-10da-4b46-9d85-4833360660fe-kube-api-access-ps6kt\") pod \"auto-csr-approver-29557990-d4fln\" (UID: \"27a61b7e-10da-4b46-9d85-4833360660fe\") " pod="openshift-infra/auto-csr-approver-29557990-d4fln" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.478290 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6kt\" (UniqueName: \"kubernetes.io/projected/27a61b7e-10da-4b46-9d85-4833360660fe-kube-api-access-ps6kt\") pod \"auto-csr-approver-29557990-d4fln\" (UID: \"27a61b7e-10da-4b46-9d85-4833360660fe\") " 
pod="openshift-infra/auto-csr-approver-29557990-d4fln" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.494639 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557990-d4fln" Mar 14 09:10:00 crc kubenswrapper[4869]: I0314 09:10:00.983052 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557990-d4fln"] Mar 14 09:10:01 crc kubenswrapper[4869]: W0314 09:10:01.001690 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27a61b7e_10da_4b46_9d85_4833360660fe.slice/crio-20af2d190cc8b90bd283a8d102578baaae6777aa4f22d2b8526655c749a27063 WatchSource:0}: Error finding container 20af2d190cc8b90bd283a8d102578baaae6777aa4f22d2b8526655c749a27063: Status 404 returned error can't find the container with id 20af2d190cc8b90bd283a8d102578baaae6777aa4f22d2b8526655c749a27063 Mar 14 09:10:02 crc kubenswrapper[4869]: I0314 09:10:02.008045 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557990-d4fln" event={"ID":"27a61b7e-10da-4b46-9d85-4833360660fe","Type":"ContainerStarted","Data":"20af2d190cc8b90bd283a8d102578baaae6777aa4f22d2b8526655c749a27063"} Mar 14 09:10:03 crc kubenswrapper[4869]: I0314 09:10:03.021849 4869 generic.go:334] "Generic (PLEG): container finished" podID="27a61b7e-10da-4b46-9d85-4833360660fe" containerID="3fedee07a079e936f700f4e70f51bf828fce1af58c5fa32aa6f1e372d425dcdd" exitCode=0 Mar 14 09:10:03 crc kubenswrapper[4869]: I0314 09:10:03.021920 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557990-d4fln" event={"ID":"27a61b7e-10da-4b46-9d85-4833360660fe","Type":"ContainerDied","Data":"3fedee07a079e936f700f4e70f51bf828fce1af58c5fa32aa6f1e372d425dcdd"} Mar 14 09:10:04 crc kubenswrapper[4869]: I0314 09:10:04.324638 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557990-d4fln" Mar 14 09:10:04 crc kubenswrapper[4869]: I0314 09:10:04.507348 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps6kt\" (UniqueName: \"kubernetes.io/projected/27a61b7e-10da-4b46-9d85-4833360660fe-kube-api-access-ps6kt\") pod \"27a61b7e-10da-4b46-9d85-4833360660fe\" (UID: \"27a61b7e-10da-4b46-9d85-4833360660fe\") " Mar 14 09:10:04 crc kubenswrapper[4869]: I0314 09:10:04.518055 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a61b7e-10da-4b46-9d85-4833360660fe-kube-api-access-ps6kt" (OuterVolumeSpecName: "kube-api-access-ps6kt") pod "27a61b7e-10da-4b46-9d85-4833360660fe" (UID: "27a61b7e-10da-4b46-9d85-4833360660fe"). InnerVolumeSpecName "kube-api-access-ps6kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:10:04 crc kubenswrapper[4869]: I0314 09:10:04.609078 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps6kt\" (UniqueName: \"kubernetes.io/projected/27a61b7e-10da-4b46-9d85-4833360660fe-kube-api-access-ps6kt\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:05 crc kubenswrapper[4869]: I0314 09:10:05.037827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557990-d4fln" event={"ID":"27a61b7e-10da-4b46-9d85-4833360660fe","Type":"ContainerDied","Data":"20af2d190cc8b90bd283a8d102578baaae6777aa4f22d2b8526655c749a27063"} Mar 14 09:10:05 crc kubenswrapper[4869]: I0314 09:10:05.037882 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557990-d4fln" Mar 14 09:10:05 crc kubenswrapper[4869]: I0314 09:10:05.037895 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20af2d190cc8b90bd283a8d102578baaae6777aa4f22d2b8526655c749a27063" Mar 14 09:10:05 crc kubenswrapper[4869]: I0314 09:10:05.417683 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557984-q2lw7"] Mar 14 09:10:05 crc kubenswrapper[4869]: I0314 09:10:05.426237 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557984-q2lw7"] Mar 14 09:10:05 crc kubenswrapper[4869]: I0314 09:10:05.717089 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce16cfb8-2f11-464c-8fe8-84be308a6131" path="/var/lib/kubelet/pods/ce16cfb8-2f11-464c-8fe8-84be308a6131/volumes" Mar 14 09:10:08 crc kubenswrapper[4869]: I0314 09:10:08.174625 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4n6nc" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.320639 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bhcmd"] Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.322040 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovn-controller" containerID="cri-o://75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136" gracePeriod=30 Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.322632 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="sbdb" containerID="cri-o://7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056" gracePeriod=30 Mar 14 09:10:27 
crc kubenswrapper[4869]: I0314 09:10:27.322699 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="nbdb" containerID="cri-o://24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806" gracePeriod=30 Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.322759 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="northd" containerID="cri-o://aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376" gracePeriod=30 Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.322814 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f" gracePeriod=30 Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.322866 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="kube-rbac-proxy-node" containerID="cri-o://f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4" gracePeriod=30 Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.322918 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovn-acl-logging" containerID="cri-o://f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33" gracePeriod=30 Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.383654 4869 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" containerID="cri-o://93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06" gracePeriod=30 Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.699107 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/3.log" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.702486 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovn-acl-logging/0.log" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.703191 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovn-controller/0.log" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.706257 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.772188 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-stbmx"] Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.773239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-log-socket" (OuterVolumeSpecName: "log-socket") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.773120 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-log-socket\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.773823 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tv2d\" (UniqueName: \"kubernetes.io/projected/489ada67-a888-460e-862c-cd59acc0c6fe-kube-api-access-2tv2d\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.773874 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-netns\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.773904 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-systemd-units\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.773964 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-ovn-kubernetes\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.773989 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774036 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-openvswitch\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-script-lib\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-kubelet\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-env-overrides\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774169 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-slash\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: 
\"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-etc-openvswitch\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774241 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-config\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774288 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-node-log\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774315 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-ovn\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774333 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-bin\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774360 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/489ada67-a888-460e-862c-cd59acc0c6fe-ovn-node-metrics-cert\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774389 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-systemd\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-netd\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-var-lib-openvswitch\") pod \"489ada67-a888-460e-862c-cd59acc0c6fe\" (UID: \"489ada67-a888-460e-862c-cd59acc0c6fe\") " Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774793 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-slash" (OuterVolumeSpecName: "host-slash") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774910 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774926 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.774964 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775009 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775040 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775060 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775074 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovn-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775084 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovn-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775100 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="kube-rbac-proxy-node" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775108 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="kube-rbac-proxy-node" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775119 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775125 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775133 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775138 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775149 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovn-acl-logging" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775156 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovn-acl-logging" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775166 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="nbdb" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775172 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="nbdb" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775183 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a61b7e-10da-4b46-9d85-4833360660fe" containerName="oc" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775192 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a61b7e-10da-4b46-9d85-4833360660fe" containerName="oc" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775201 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="kubecfg-setup" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775208 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="kubecfg-setup" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775216 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="kube-rbac-proxy-ovn-metrics" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775224 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" 
containerName="kube-rbac-proxy-ovn-metrics" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775234 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="northd" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775241 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="northd" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775255 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="sbdb" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775262 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="sbdb" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775371 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a61b7e-10da-4b46-9d85-4833360660fe" containerName="oc" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775383 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="nbdb" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775393 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775404 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovn-acl-logging" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775419 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="sbdb" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775429 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 
09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775437 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovn-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775446 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="northd" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775455 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775462 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775472 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="kube-rbac-proxy-node" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775479 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="kube-rbac-proxy-ovn-metrics" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775675 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775688 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: E0314 09:10:27.775699 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775707 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" 
containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775841 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" containerName="ovnkube-controller" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775038 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775065 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775843 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775872 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.775930 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.776434 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.776471 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.776496 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.776540 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-node-log" (OuterVolumeSpecName: "node-log") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.776558 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.778863 4869 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.780466 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.780776 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489ada67-a888-460e-862c-cd59acc0c6fe-kube-api-access-2tv2d" (OuterVolumeSpecName: "kube-api-access-2tv2d") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "kube-api-access-2tv2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.781009 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489ada67-a888-460e-862c-cd59acc0c6fe-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.781187 4869 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.781216 4869 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-slash\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.781229 4869 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-log-socket\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.781241 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.781252 4869 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.791057 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "489ada67-a888-460e-862c-cd59acc0c6fe" (UID: "489ada67-a888-460e-862c-cd59acc0c6fe"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.881960 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-cni-netd\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.882321 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-log-socket\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.882429 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-node-log\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" 
Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.882603 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7df26e8-826a-44e5-838f-c91825501746-ovn-node-metrics-cert\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.882716 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-etc-openvswitch\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.882802 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.882891 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7df26e8-826a-44e5-838f-c91825501746-ovnkube-script-lib\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.882983 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r25q2\" (UniqueName: \"kubernetes.io/projected/b7df26e8-826a-44e5-838f-c91825501746-kube-api-access-r25q2\") pod 
\"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883070 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-systemd-units\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883196 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-run-ovn-kubernetes\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883331 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-var-lib-openvswitch\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883421 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-run-openvswitch\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-run-ovn\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883582 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-run-netns\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883667 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-kubelet\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883737 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-cni-bin\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883828 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-slash\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883902 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/b7df26e8-826a-44e5-838f-c91825501746-ovnkube-config\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.883977 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-run-systemd\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884052 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7df26e8-826a-44e5-838f-c91825501746-env-overrides\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884146 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tv2d\" (UniqueName: \"kubernetes.io/projected/489ada67-a888-460e-862c-cd59acc0c6fe-kube-api-access-2tv2d\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884226 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884312 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884367 4869 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884420 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884470 4869 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884544 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/489ada67-a888-460e-862c-cd59acc0c6fe-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884603 4869 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-node-log\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884659 4869 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884712 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884766 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/489ada67-a888-460e-862c-cd59acc0c6fe-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 
crc kubenswrapper[4869]: I0314 09:10:27.884819 4869 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884866 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.884936 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/489ada67-a888-460e-862c-cd59acc0c6fe-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.986917 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-cni-netd\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-cni-netd\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987353 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-log-socket\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987503 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-node-log\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987639 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-node-log\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7df26e8-826a-44e5-838f-c91825501746-ovn-node-metrics-cert\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987808 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-etc-openvswitch\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987888 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7df26e8-826a-44e5-838f-c91825501746-ovnkube-script-lib\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987947 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r25q2\" (UniqueName: \"kubernetes.io/projected/b7df26e8-826a-44e5-838f-c91825501746-kube-api-access-r25q2\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-systemd-units\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.987991 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-etc-openvswitch\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988018 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-run-ovn-kubernetes\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988062 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-var-lib-openvswitch\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-run-openvswitch\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-run-ovn\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988206 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-run-netns\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988259 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-kubelet\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988297 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-cni-bin\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988355 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7df26e8-826a-44e5-838f-c91825501746-ovnkube-config\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988401 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-slash\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988463 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-run-systemd\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7df26e8-826a-44e5-838f-c91825501746-env-overrides\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988797 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b7df26e8-826a-44e5-838f-c91825501746-ovnkube-script-lib\") pod 
\"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.988869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-run-ovn\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.989068 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-cni-bin\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.989171 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-run-netns\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.989242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-kubelet\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.989306 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-slash\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.989865 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-systemd-units\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.989922 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-host-run-ovn-kubernetes\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.989976 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-var-lib-openvswitch\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.990006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-run-openvswitch\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 
09:10:27.990035 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-run-systemd\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.991140 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b7df26e8-826a-44e5-838f-c91825501746-ovnkube-config\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.991227 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b7df26e8-826a-44e5-838f-c91825501746-env-overrides\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.992158 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b7df26e8-826a-44e5-838f-c91825501746-log-socket\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:27 crc kubenswrapper[4869]: I0314 09:10:27.994892 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b7df26e8-826a-44e5-838f-c91825501746-ovn-node-metrics-cert\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.007477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r25q2\" 
(UniqueName: \"kubernetes.io/projected/b7df26e8-826a-44e5-838f-c91825501746-kube-api-access-r25q2\") pod \"ovnkube-node-stbmx\" (UID: \"b7df26e8-826a-44e5-838f-c91825501746\") " pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.103023 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:28 crc kubenswrapper[4869]: W0314 09:10:28.137994 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7df26e8_826a_44e5_838f_c91825501746.slice/crio-8c90f899f4ca780ca008458864cdce0406dc4c353cdffe349c8fba2139205fcb WatchSource:0}: Error finding container 8c90f899f4ca780ca008458864cdce0406dc4c353cdffe349c8fba2139205fcb: Status 404 returned error can't find the container with id 8c90f899f4ca780ca008458864cdce0406dc4c353cdffe349c8fba2139205fcb Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.231910 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovnkube-controller/3.log" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.235371 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovn-acl-logging/0.log" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.236152 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bhcmd_489ada67-a888-460e-862c-cd59acc0c6fe/ovn-controller/0.log" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.237000 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06" exitCode=0 Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.237239 4869 generic.go:334] "Generic 
(PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056" exitCode=0 Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.237421 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806" exitCode=0 Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.237615 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376" exitCode=0 Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.237840 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f" exitCode=0 Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.238013 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4" exitCode=0 Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.238190 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33" exitCode=143 Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.238363 4869 generic.go:334] "Generic (PLEG): container finished" podID="489ada67-a888-460e-862c-cd59acc0c6fe" containerID="75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136" exitCode=143 Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.237112 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" 
event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.238758 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.238792 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.238814 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.238834 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.237107 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.238874 4869 scope.go:117] "RemoveContainer" containerID="93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.238855 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239093 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239112 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239124 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239136 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239147 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239177 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239188 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239200 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239211 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239228 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239245 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239257 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239268 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239279 4869 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239290 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239301 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239312 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239369 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239382 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239393 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239409 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} Mar 14 
09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239427 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239439 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239450 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239461 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239472 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239482 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239491 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239503 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} Mar 14 
09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239542 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239552 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239568 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhcmd" event={"ID":"489ada67-a888-460e-862c-cd59acc0c6fe","Type":"ContainerDied","Data":"b246c71dba690ea83fd1de06ec26b68f5e94c2cd8987c710114d2b13571587ef"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239585 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239598 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239608 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239618 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239629 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239639 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239649 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239660 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239671 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.239684 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.248388 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/2.log" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.249126 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/1.log" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.249778 4869 generic.go:334] "Generic (PLEG): container finished" podID="3aedc0f3-51fe-492b-9337-02b2b6e38327" 
containerID="49287961c3f78e591c0ac0e3cdfe6d5f5e67d4326b3c7307a7d036815caf7805" exitCode=2 Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.249818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nncq" event={"ID":"3aedc0f3-51fe-492b-9337-02b2b6e38327","Type":"ContainerDied","Data":"49287961c3f78e591c0ac0e3cdfe6d5f5e67d4326b3c7307a7d036815caf7805"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.249948 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.250910 4869 scope.go:117] "RemoveContainer" containerID="49287961c3f78e591c0ac0e3cdfe6d5f5e67d4326b3c7307a7d036815caf7805" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.251102 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9nncq_openshift-multus(3aedc0f3-51fe-492b-9337-02b2b6e38327)\"" pod="openshift-multus/multus-9nncq" podUID="3aedc0f3-51fe-492b-9337-02b2b6e38327" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.252384 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerStarted","Data":"8c90f899f4ca780ca008458864cdce0406dc4c353cdffe349c8fba2139205fcb"} Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.272934 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.319576 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bhcmd"] Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.324088 4869 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bhcmd"] Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.324697 4869 scope.go:117] "RemoveContainer" containerID="7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.350774 4869 scope.go:117] "RemoveContainer" containerID="24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.410570 4869 scope.go:117] "RemoveContainer" containerID="aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.431224 4869 scope.go:117] "RemoveContainer" containerID="7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.444805 4869 scope.go:117] "RemoveContainer" containerID="f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.459029 4869 scope.go:117] "RemoveContainer" containerID="f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.489725 4869 scope.go:117] "RemoveContainer" containerID="75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.505851 4869 scope.go:117] "RemoveContainer" containerID="335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.519675 4869 scope.go:117] "RemoveContainer" containerID="93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.520068 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": container with ID starting with 
93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06 not found: ID does not exist" containerID="93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.520113 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} err="failed to get container status \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": rpc error: code = NotFound desc = could not find container \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": container with ID starting with 93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.520144 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.520393 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\": container with ID starting with 9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1 not found: ID does not exist" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.520422 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} err="failed to get container status \"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\": rpc error: code = NotFound desc = could not find container \"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\": container with ID starting with 9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1 not found: ID does not 
exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.520440 4869 scope.go:117] "RemoveContainer" containerID="7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.520818 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\": container with ID starting with 7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056 not found: ID does not exist" containerID="7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.520846 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} err="failed to get container status \"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\": rpc error: code = NotFound desc = could not find container \"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\": container with ID starting with 7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.520865 4869 scope.go:117] "RemoveContainer" containerID="24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.521464 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\": container with ID starting with 24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806 not found: ID does not exist" containerID="24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.521561 4869 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} err="failed to get container status \"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\": rpc error: code = NotFound desc = could not find container \"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\": container with ID starting with 24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.521581 4869 scope.go:117] "RemoveContainer" containerID="aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.522043 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\": container with ID starting with aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376 not found: ID does not exist" containerID="aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.522089 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} err="failed to get container status \"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\": rpc error: code = NotFound desc = could not find container \"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\": container with ID starting with aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.522124 4869 scope.go:117] "RemoveContainer" containerID="7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.522544 4869 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\": container with ID starting with 7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f not found: ID does not exist" containerID="7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.522585 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} err="failed to get container status \"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\": rpc error: code = NotFound desc = could not find container \"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\": container with ID starting with 7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.522625 4869 scope.go:117] "RemoveContainer" containerID="f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.523031 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\": container with ID starting with f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4 not found: ID does not exist" containerID="f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.523073 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} err="failed to get container status \"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\": rpc error: code = NotFound desc = could 
not find container \"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\": container with ID starting with f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.523102 4869 scope.go:117] "RemoveContainer" containerID="f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.523464 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\": container with ID starting with f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33 not found: ID does not exist" containerID="f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.523499 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} err="failed to get container status \"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\": rpc error: code = NotFound desc = could not find container \"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\": container with ID starting with f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.523548 4869 scope.go:117] "RemoveContainer" containerID="75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.523884 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\": container with ID starting with 75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136 not found: 
ID does not exist" containerID="75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.523929 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} err="failed to get container status \"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\": rpc error: code = NotFound desc = could not find container \"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\": container with ID starting with 75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.523956 4869 scope.go:117] "RemoveContainer" containerID="335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92" Mar 14 09:10:28 crc kubenswrapper[4869]: E0314 09:10:28.524355 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\": container with ID starting with 335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92 not found: ID does not exist" containerID="335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.524397 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92"} err="failed to get container status \"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\": rpc error: code = NotFound desc = could not find container \"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\": container with ID starting with 335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.524425 4869 
scope.go:117] "RemoveContainer" containerID="93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.524754 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} err="failed to get container status \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": rpc error: code = NotFound desc = could not find container \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": container with ID starting with 93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.524785 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.525123 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} err="failed to get container status \"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\": rpc error: code = NotFound desc = could not find container \"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\": container with ID starting with 9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.525159 4869 scope.go:117] "RemoveContainer" containerID="7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.525630 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} err="failed to get container status \"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\": rpc 
error: code = NotFound desc = could not find container \"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\": container with ID starting with 7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.525677 4869 scope.go:117] "RemoveContainer" containerID="24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.525973 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} err="failed to get container status \"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\": rpc error: code = NotFound desc = could not find container \"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\": container with ID starting with 24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.526013 4869 scope.go:117] "RemoveContainer" containerID="aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.526499 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} err="failed to get container status \"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\": rpc error: code = NotFound desc = could not find container \"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\": container with ID starting with aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.526559 4869 scope.go:117] "RemoveContainer" containerID="7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f" Mar 14 09:10:28 crc 
kubenswrapper[4869]: I0314 09:10:28.527243 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} err="failed to get container status \"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\": rpc error: code = NotFound desc = could not find container \"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\": container with ID starting with 7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.527275 4869 scope.go:117] "RemoveContainer" containerID="f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.527598 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} err="failed to get container status \"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\": rpc error: code = NotFound desc = could not find container \"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\": container with ID starting with f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.527624 4869 scope.go:117] "RemoveContainer" containerID="f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.528149 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} err="failed to get container status \"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\": rpc error: code = NotFound desc = could not find container \"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\": container 
with ID starting with f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.528173 4869 scope.go:117] "RemoveContainer" containerID="75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.528456 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} err="failed to get container status \"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\": rpc error: code = NotFound desc = could not find container \"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\": container with ID starting with 75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.528489 4869 scope.go:117] "RemoveContainer" containerID="335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.528851 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92"} err="failed to get container status \"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\": rpc error: code = NotFound desc = could not find container \"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\": container with ID starting with 335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.528875 4869 scope.go:117] "RemoveContainer" containerID="93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.529262 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} err="failed to get container status \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": rpc error: code = NotFound desc = could not find container \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": container with ID starting with 93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.529297 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.529713 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} err="failed to get container status \"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\": rpc error: code = NotFound desc = could not find container \"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\": container with ID starting with 9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.529743 4869 scope.go:117] "RemoveContainer" containerID="7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.530065 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} err="failed to get container status \"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\": rpc error: code = NotFound desc = could not find container \"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\": container with ID starting with 7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056 not found: ID does not 
exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.530104 4869 scope.go:117] "RemoveContainer" containerID="24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.530418 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} err="failed to get container status \"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\": rpc error: code = NotFound desc = could not find container \"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\": container with ID starting with 24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.530453 4869 scope.go:117] "RemoveContainer" containerID="aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.530850 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} err="failed to get container status \"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\": rpc error: code = NotFound desc = could not find container \"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\": container with ID starting with aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.530882 4869 scope.go:117] "RemoveContainer" containerID="7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.531229 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} err="failed to get container status 
\"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\": rpc error: code = NotFound desc = could not find container \"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\": container with ID starting with 7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.531258 4869 scope.go:117] "RemoveContainer" containerID="f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.531593 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} err="failed to get container status \"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\": rpc error: code = NotFound desc = could not find container \"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\": container with ID starting with f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.531619 4869 scope.go:117] "RemoveContainer" containerID="f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.531874 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} err="failed to get container status \"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\": rpc error: code = NotFound desc = could not find container \"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\": container with ID starting with f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.531900 4869 scope.go:117] "RemoveContainer" 
containerID="75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.532191 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} err="failed to get container status \"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\": rpc error: code = NotFound desc = could not find container \"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\": container with ID starting with 75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.532215 4869 scope.go:117] "RemoveContainer" containerID="335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.532498 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92"} err="failed to get container status \"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\": rpc error: code = NotFound desc = could not find container \"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\": container with ID starting with 335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.532541 4869 scope.go:117] "RemoveContainer" containerID="93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.533009 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} err="failed to get container status \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": rpc error: code = NotFound desc = could 
not find container \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": container with ID starting with 93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.533038 4869 scope.go:117] "RemoveContainer" containerID="9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.533379 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1"} err="failed to get container status \"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\": rpc error: code = NotFound desc = could not find container \"9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1\": container with ID starting with 9179af1c03a79d2dd6d13c8f48a0fed8e512a57177865f15c06dc33093886af1 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.533403 4869 scope.go:117] "RemoveContainer" containerID="7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.533763 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056"} err="failed to get container status \"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\": rpc error: code = NotFound desc = could not find container \"7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056\": container with ID starting with 7ab9197ffa3b362c4e36c1afb3c7af6911058a1aa48c09551ff6209302b06056 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.533796 4869 scope.go:117] "RemoveContainer" containerID="24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 
09:10:28.534071 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806"} err="failed to get container status \"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\": rpc error: code = NotFound desc = could not find container \"24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806\": container with ID starting with 24ae461a3f25e416745d254f601854ed6842442de6c7558a94637da23804b806 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.534094 4869 scope.go:117] "RemoveContainer" containerID="aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.534367 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376"} err="failed to get container status \"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\": rpc error: code = NotFound desc = could not find container \"aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376\": container with ID starting with aa66d3e4f1522cc5cbca10e118de944b5eaf22b6b1866a3e7f45237c900d3376 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.534395 4869 scope.go:117] "RemoveContainer" containerID="7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.534748 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f"} err="failed to get container status \"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\": rpc error: code = NotFound desc = could not find container \"7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f\": container with ID starting with 
7c79098097bfe63e824c59b49ee710661225aa4db438d08efa9a4cfe42dba74f not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.534785 4869 scope.go:117] "RemoveContainer" containerID="f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.535100 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4"} err="failed to get container status \"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\": rpc error: code = NotFound desc = could not find container \"f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4\": container with ID starting with f8301fa0aec00719ec67f81e12265a2d5be0a19825da110edf44378ef288e6b4 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.535127 4869 scope.go:117] "RemoveContainer" containerID="f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.535546 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33"} err="failed to get container status \"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\": rpc error: code = NotFound desc = could not find container \"f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33\": container with ID starting with f82948e4bfc9fcf986592a268ad306feef58459b83342c7b0121c9f8e4213e33 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.535570 4869 scope.go:117] "RemoveContainer" containerID="75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.535898 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136"} err="failed to get container status \"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\": rpc error: code = NotFound desc = could not find container \"75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136\": container with ID starting with 75b25404c983b661658215466ab22ba1c453330287950ae064d83c3cdaf56136 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.535928 4869 scope.go:117] "RemoveContainer" containerID="335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.536220 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92"} err="failed to get container status \"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\": rpc error: code = NotFound desc = could not find container \"335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92\": container with ID starting with 335e2cc00cba0020bea66f04b0f7836f7c031fcf2d76f9f23e11d633f26fea92 not found: ID does not exist" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.536248 4869 scope.go:117] "RemoveContainer" containerID="93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06" Mar 14 09:10:28 crc kubenswrapper[4869]: I0314 09:10:28.536559 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06"} err="failed to get container status \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": rpc error: code = NotFound desc = could not find container \"93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06\": container with ID starting with 93ff5e918e05200409785ae7579c68fc5ea0049ee9241056df7f1db8074dae06 not found: ID does not 
exist" Mar 14 09:10:29 crc kubenswrapper[4869]: I0314 09:10:29.264433 4869 generic.go:334] "Generic (PLEG): container finished" podID="b7df26e8-826a-44e5-838f-c91825501746" containerID="48022a8e1710cc6026030ef78fb4e06b4c254c7b17a1510c774014613aacb069" exitCode=0 Mar 14 09:10:29 crc kubenswrapper[4869]: I0314 09:10:29.264499 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerDied","Data":"48022a8e1710cc6026030ef78fb4e06b4c254c7b17a1510c774014613aacb069"} Mar 14 09:10:29 crc kubenswrapper[4869]: I0314 09:10:29.712078 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489ada67-a888-460e-862c-cd59acc0c6fe" path="/var/lib/kubelet/pods/489ada67-a888-460e-862c-cd59acc0c6fe/volumes" Mar 14 09:10:30 crc kubenswrapper[4869]: I0314 09:10:30.281425 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerStarted","Data":"16480ebf79c540359fafb61c57878a8017534a028234362c6b2adbf25158a2f1"} Mar 14 09:10:30 crc kubenswrapper[4869]: I0314 09:10:30.281485 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerStarted","Data":"38567545b5a6e860121d4ebeba65dd3940a2e3441dd6149d7175c0a9f5784993"} Mar 14 09:10:30 crc kubenswrapper[4869]: I0314 09:10:30.281501 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerStarted","Data":"0e8113a7eed205eedc9fc83746e3f41dd29c7197b4aa7d582645750310595f10"} Mar 14 09:10:30 crc kubenswrapper[4869]: I0314 09:10:30.281546 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" 
event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerStarted","Data":"6d4ae24d6602841898176408613837b04acff1b802ef9dd2927fd964c150d738"} Mar 14 09:10:30 crc kubenswrapper[4869]: I0314 09:10:30.281559 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerStarted","Data":"03e74d32e35c571fef16741aeef67e3ec5a16bb331ba5ec6e781eeea7fd13c7c"} Mar 14 09:10:30 crc kubenswrapper[4869]: I0314 09:10:30.281573 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerStarted","Data":"c258dea389d1b274343b3f6855775d41c59a622490d52a3a585f385e00626034"} Mar 14 09:10:33 crc kubenswrapper[4869]: I0314 09:10:33.317970 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerStarted","Data":"67ad5a2b8d17c2f80e21cbd8d4ee2eaba348926d48d09e3e60e22cfd6a535003"} Mar 14 09:10:35 crc kubenswrapper[4869]: I0314 09:10:35.339862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" event={"ID":"b7df26e8-826a-44e5-838f-c91825501746","Type":"ContainerStarted","Data":"bd34cb8d21494f94a9ad17cd3a1476ddfcf583d6b250ebd01a5ab859ca8af636"} Mar 14 09:10:35 crc kubenswrapper[4869]: I0314 09:10:35.340301 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:35 crc kubenswrapper[4869]: I0314 09:10:35.383297 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" podStartSLOduration=8.383275141 podStartE2EDuration="8.383275141s" podCreationTimestamp="2026-03-14 09:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:10:35.382909083 +0000 UTC m=+788.355191146" watchObservedRunningTime="2026-03-14 09:10:35.383275141 +0000 UTC m=+788.355557194" Mar 14 09:10:35 crc kubenswrapper[4869]: I0314 09:10:35.385665 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:36 crc kubenswrapper[4869]: I0314 09:10:36.347819 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:36 crc kubenswrapper[4869]: I0314 09:10:36.348484 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:36 crc kubenswrapper[4869]: I0314 09:10:36.396022 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.485754 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs"] Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.487889 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.491412 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.505100 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs"] Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.552897 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.552980 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb26v\" (UniqueName: \"kubernetes.io/projected/698f4362-610d-4426-a6da-e569295eedfd-kube-api-access-qb26v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.553302 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: 
I0314 09:10:39.605318 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.605397 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.654290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.654381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb26v\" (UniqueName: \"kubernetes.io/projected/698f4362-610d-4426-a6da-e569295eedfd-kube-api-access-qb26v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.654453 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs\" (UID: 
\"698f4362-610d-4426-a6da-e569295eedfd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.655173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.655366 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.675265 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb26v\" (UniqueName: \"kubernetes.io/projected/698f4362-610d-4426-a6da-e569295eedfd-kube-api-access-qb26v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: I0314 09:10:39.810721 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: E0314 09:10:39.847602 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(98bac3df911e3ea9a2fbb8273f38519194c9ff7ed0ce7c6e78239b333151bebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:10:39 crc kubenswrapper[4869]: E0314 09:10:39.848143 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(98bac3df911e3ea9a2fbb8273f38519194c9ff7ed0ce7c6e78239b333151bebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: E0314 09:10:39.848197 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(98bac3df911e3ea9a2fbb8273f38519194c9ff7ed0ce7c6e78239b333151bebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:39 crc kubenswrapper[4869]: E0314 09:10:39.848299 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace(698f4362-610d-4426-a6da-e569295eedfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace(698f4362-610d-4426-a6da-e569295eedfd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(98bac3df911e3ea9a2fbb8273f38519194c9ff7ed0ce7c6e78239b333151bebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" podUID="698f4362-610d-4426-a6da-e569295eedfd" Mar 14 09:10:40 crc kubenswrapper[4869]: I0314 09:10:40.376675 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:40 crc kubenswrapper[4869]: I0314 09:10:40.377777 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:40 crc kubenswrapper[4869]: E0314 09:10:40.422813 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(54dec80234b99bfbc781833a67fc600a0a69c207a945261f81465d5a2813aab4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 14 09:10:40 crc kubenswrapper[4869]: E0314 09:10:40.422929 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(54dec80234b99bfbc781833a67fc600a0a69c207a945261f81465d5a2813aab4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:40 crc kubenswrapper[4869]: E0314 09:10:40.422973 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(54dec80234b99bfbc781833a67fc600a0a69c207a945261f81465d5a2813aab4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:40 crc kubenswrapper[4869]: E0314 09:10:40.423064 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace(698f4362-610d-4426-a6da-e569295eedfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace(698f4362-610d-4426-a6da-e569295eedfd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(54dec80234b99bfbc781833a67fc600a0a69c207a945261f81465d5a2813aab4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" podUID="698f4362-610d-4426-a6da-e569295eedfd" Mar 14 09:10:41 crc kubenswrapper[4869]: I0314 09:10:41.704185 4869 scope.go:117] "RemoveContainer" containerID="49287961c3f78e591c0ac0e3cdfe6d5f5e67d4326b3c7307a7d036815caf7805" Mar 14 09:10:41 crc kubenswrapper[4869]: E0314 09:10:41.704501 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9nncq_openshift-multus(3aedc0f3-51fe-492b-9337-02b2b6e38327)\"" pod="openshift-multus/multus-9nncq" podUID="3aedc0f3-51fe-492b-9337-02b2b6e38327" Mar 14 09:10:51 crc kubenswrapper[4869]: I0314 09:10:51.713953 4869 scope.go:117] "RemoveContainer" containerID="42e3c2e283c8a71d32a8d9404bc6c1c1bed71bede41d6d80b85ca92c9b51909c" Mar 14 09:10:51 crc kubenswrapper[4869]: I0314 09:10:51.764786 4869 scope.go:117] "RemoveContainer" 
containerID="10be57f87f3e16c3a6c121f1a65be02b4ac8ee6056d129974b45c3d6303632a1" Mar 14 09:10:52 crc kubenswrapper[4869]: I0314 09:10:52.468081 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/2.log" Mar 14 09:10:54 crc kubenswrapper[4869]: I0314 09:10:54.703828 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:54 crc kubenswrapper[4869]: I0314 09:10:54.705010 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:54 crc kubenswrapper[4869]: E0314 09:10:54.746456 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(a3da99e063d3b047d0b48bedd15c2689ae1468fe790f7e51aea6a5d741e06210): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 14 09:10:54 crc kubenswrapper[4869]: E0314 09:10:54.746586 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(a3da99e063d3b047d0b48bedd15c2689ae1468fe790f7e51aea6a5d741e06210): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:54 crc kubenswrapper[4869]: E0314 09:10:54.746627 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(a3da99e063d3b047d0b48bedd15c2689ae1468fe790f7e51aea6a5d741e06210): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:10:54 crc kubenswrapper[4869]: E0314 09:10:54.746720 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace(698f4362-610d-4426-a6da-e569295eedfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace(698f4362-610d-4426-a6da-e569295eedfd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_openshift-marketplace_698f4362-610d-4426-a6da-e569295eedfd_0(a3da99e063d3b047d0b48bedd15c2689ae1468fe790f7e51aea6a5d741e06210): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" podUID="698f4362-610d-4426-a6da-e569295eedfd" Mar 14 09:10:56 crc kubenswrapper[4869]: I0314 09:10:56.703794 4869 scope.go:117] "RemoveContainer" containerID="49287961c3f78e591c0ac0e3cdfe6d5f5e67d4326b3c7307a7d036815caf7805" Mar 14 09:10:57 crc kubenswrapper[4869]: I0314 09:10:57.508052 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9nncq_3aedc0f3-51fe-492b-9337-02b2b6e38327/kube-multus/2.log" Mar 14 09:10:57 crc kubenswrapper[4869]: I0314 09:10:57.508952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9nncq" event={"ID":"3aedc0f3-51fe-492b-9337-02b2b6e38327","Type":"ContainerStarted","Data":"4783ef8edc99594b8e72f466d897ca5ebe8da246b104ff2f6ea2a04f42ef8e69"} Mar 14 09:10:58 crc kubenswrapper[4869]: I0314 09:10:58.143443 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-stbmx" Mar 14 09:11:09 crc kubenswrapper[4869]: I0314 09:11:09.605029 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:11:09 crc kubenswrapper[4869]: I0314 09:11:09.606073 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:11:09 crc kubenswrapper[4869]: I0314 09:11:09.703312 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:11:09 crc kubenswrapper[4869]: I0314 09:11:09.703933 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:11:09 crc kubenswrapper[4869]: I0314 09:11:09.967816 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs"] Mar 14 09:11:09 crc kubenswrapper[4869]: W0314 09:11:09.976817 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod698f4362_610d_4426_a6da_e569295eedfd.slice/crio-b89efd43420361b1618dcd20032ec19aae250600e69341e1044b7c9b34986903 WatchSource:0}: Error finding container b89efd43420361b1618dcd20032ec19aae250600e69341e1044b7c9b34986903: Status 404 returned error can't find the container with id b89efd43420361b1618dcd20032ec19aae250600e69341e1044b7c9b34986903 Mar 14 09:11:10 crc kubenswrapper[4869]: I0314 09:11:10.629693 4869 generic.go:334] "Generic (PLEG): container finished" podID="698f4362-610d-4426-a6da-e569295eedfd" containerID="cbb706b58dacfea24b1464fb64777349892a5cc98bbee9f1473b4c2e8564224e" exitCode=0 Mar 14 09:11:10 crc kubenswrapper[4869]: I0314 09:11:10.629995 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" event={"ID":"698f4362-610d-4426-a6da-e569295eedfd","Type":"ContainerDied","Data":"cbb706b58dacfea24b1464fb64777349892a5cc98bbee9f1473b4c2e8564224e"} Mar 14 09:11:10 crc kubenswrapper[4869]: I0314 09:11:10.630787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" 
event={"ID":"698f4362-610d-4426-a6da-e569295eedfd","Type":"ContainerStarted","Data":"b89efd43420361b1618dcd20032ec19aae250600e69341e1044b7c9b34986903"} Mar 14 09:11:12 crc kubenswrapper[4869]: I0314 09:11:12.648603 4869 generic.go:334] "Generic (PLEG): container finished" podID="698f4362-610d-4426-a6da-e569295eedfd" containerID="8987687ceda810847a8b157288c47be1bffcf7556a1134ff96949d6aa92c6bb0" exitCode=0 Mar 14 09:11:12 crc kubenswrapper[4869]: I0314 09:11:12.648692 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" event={"ID":"698f4362-610d-4426-a6da-e569295eedfd","Type":"ContainerDied","Data":"8987687ceda810847a8b157288c47be1bffcf7556a1134ff96949d6aa92c6bb0"} Mar 14 09:11:13 crc kubenswrapper[4869]: I0314 09:11:13.660991 4869 generic.go:334] "Generic (PLEG): container finished" podID="698f4362-610d-4426-a6da-e569295eedfd" containerID="7be97aa5366928a42f0c761feb1b42e434cb7b0db598cfcef3b94e8ba28f5b40" exitCode=0 Mar 14 09:11:13 crc kubenswrapper[4869]: I0314 09:11:13.661070 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" event={"ID":"698f4362-610d-4426-a6da-e569295eedfd","Type":"ContainerDied","Data":"7be97aa5366928a42f0c761feb1b42e434cb7b0db598cfcef3b94e8ba28f5b40"} Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.039062 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.177529 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb26v\" (UniqueName: \"kubernetes.io/projected/698f4362-610d-4426-a6da-e569295eedfd-kube-api-access-qb26v\") pod \"698f4362-610d-4426-a6da-e569295eedfd\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.177956 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-bundle\") pod \"698f4362-610d-4426-a6da-e569295eedfd\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.178147 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-util\") pod \"698f4362-610d-4426-a6da-e569295eedfd\" (UID: \"698f4362-610d-4426-a6da-e569295eedfd\") " Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.180960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-bundle" (OuterVolumeSpecName: "bundle") pod "698f4362-610d-4426-a6da-e569295eedfd" (UID: "698f4362-610d-4426-a6da-e569295eedfd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.187751 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698f4362-610d-4426-a6da-e569295eedfd-kube-api-access-qb26v" (OuterVolumeSpecName: "kube-api-access-qb26v") pod "698f4362-610d-4426-a6da-e569295eedfd" (UID: "698f4362-610d-4426-a6da-e569295eedfd"). InnerVolumeSpecName "kube-api-access-qb26v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.209663 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-util" (OuterVolumeSpecName: "util") pod "698f4362-610d-4426-a6da-e569295eedfd" (UID: "698f4362-610d-4426-a6da-e569295eedfd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.280182 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb26v\" (UniqueName: \"kubernetes.io/projected/698f4362-610d-4426-a6da-e569295eedfd-kube-api-access-qb26v\") on node \"crc\" DevicePath \"\"" Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.280223 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.280236 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/698f4362-610d-4426-a6da-e569295eedfd-util\") on node \"crc\" DevicePath \"\"" Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.676584 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" event={"ID":"698f4362-610d-4426-a6da-e569295eedfd","Type":"ContainerDied","Data":"b89efd43420361b1618dcd20032ec19aae250600e69341e1044b7c9b34986903"} Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.676629 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b89efd43420361b1618dcd20032ec19aae250600e69341e1044b7c9b34986903" Mar 14 09:11:15 crc kubenswrapper[4869]: I0314 09:11:15.676649 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs" Mar 14 09:11:22 crc kubenswrapper[4869]: I0314 09:11:22.693227 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.088114 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg"] Mar 14 09:11:29 crc kubenswrapper[4869]: E0314 09:11:29.088907 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698f4362-610d-4426-a6da-e569295eedfd" containerName="pull" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.088922 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="698f4362-610d-4426-a6da-e569295eedfd" containerName="pull" Mar 14 09:11:29 crc kubenswrapper[4869]: E0314 09:11:29.088941 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698f4362-610d-4426-a6da-e569295eedfd" containerName="util" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.088948 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="698f4362-610d-4426-a6da-e569295eedfd" containerName="util" Mar 14 09:11:29 crc kubenswrapper[4869]: E0314 09:11:29.088963 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698f4362-610d-4426-a6da-e569295eedfd" containerName="extract" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.088972 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="698f4362-610d-4426-a6da-e569295eedfd" containerName="extract" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.089094 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="698f4362-610d-4426-a6da-e569295eedfd" containerName="extract" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.089582 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.091659 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-vl99d" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.092033 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.092370 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.100051 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.148692 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.149531 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.151761 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.151817 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-hq9kd" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.170878 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.171567 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.184037 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.210279 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-pfxf9\" (UID: \"a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.210339 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv9gv\" (UniqueName: \"kubernetes.io/projected/3dca19a5-2b14-442a-b257-8fdd673d7a23-kube-api-access-fv9gv\") pod \"obo-prometheus-operator-68bc856cb9-zc9kg\" (UID: \"3dca19a5-2b14-442a-b257-8fdd673d7a23\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.210385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd7586a9-5944-496e-95a7-c62cacd45de7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-9xsl8\" (UID: \"fd7586a9-5944-496e-95a7-c62cacd45de7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.210402 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-95768cd78-pfxf9\" (UID: \"a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.210469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd7586a9-5944-496e-95a7-c62cacd45de7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-9xsl8\" (UID: \"fd7586a9-5944-496e-95a7-c62cacd45de7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.218498 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.311276 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-w2q6b"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.311942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd7586a9-5944-496e-95a7-c62cacd45de7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-9xsl8\" (UID: \"fd7586a9-5944-496e-95a7-c62cacd45de7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.311999 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-pfxf9\" (UID: \"a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" Mar 14 09:11:29 crc 
kubenswrapper[4869]: I0314 09:11:29.312031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv9gv\" (UniqueName: \"kubernetes.io/projected/3dca19a5-2b14-442a-b257-8fdd673d7a23-kube-api-access-fv9gv\") pod \"obo-prometheus-operator-68bc856cb9-zc9kg\" (UID: \"3dca19a5-2b14-442a-b257-8fdd673d7a23\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.312054 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd7586a9-5944-496e-95a7-c62cacd45de7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-9xsl8\" (UID: \"fd7586a9-5944-496e-95a7-c62cacd45de7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.312077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-pfxf9\" (UID: \"a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.312148 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.316923 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rgntt" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.317083 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-pfxf9\" (UID: \"a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.316922 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.317629 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-pfxf9\" (UID: \"a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.318946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd7586a9-5944-496e-95a7-c62cacd45de7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-9xsl8\" (UID: \"fd7586a9-5944-496e-95a7-c62cacd45de7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.319265 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/fd7586a9-5944-496e-95a7-c62cacd45de7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-95768cd78-9xsl8\" (UID: \"fd7586a9-5944-496e-95a7-c62cacd45de7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.331390 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-w2q6b"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.345265 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv9gv\" (UniqueName: \"kubernetes.io/projected/3dca19a5-2b14-442a-b257-8fdd673d7a23-kube-api-access-fv9gv\") pod \"obo-prometheus-operator-68bc856cb9-zc9kg\" (UID: \"3dca19a5-2b14-442a-b257-8fdd673d7a23\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.406000 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.465924 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.493844 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.524091 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9hb\" (UniqueName: \"kubernetes.io/projected/31fe446c-71c8-4715-988d-513ec60bb444-kube-api-access-wt9hb\") pod \"observability-operator-59bdc8b94-w2q6b\" (UID: \"31fe446c-71c8-4715-988d-513ec60bb444\") " pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.524420 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/31fe446c-71c8-4715-988d-513ec60bb444-observability-operator-tls\") pod \"observability-operator-59bdc8b94-w2q6b\" (UID: \"31fe446c-71c8-4715-988d-513ec60bb444\") " pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.526312 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-wjfwj"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.531682 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.537758 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-wjfwj"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.539909 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-c2zmw" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.626782 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntnp5\" (UniqueName: \"kubernetes.io/projected/bae7e494-3d8d-4c79-be70-40c1013b81c2-kube-api-access-ntnp5\") pod \"perses-operator-5bf474d74f-wjfwj\" (UID: \"bae7e494-3d8d-4c79-be70-40c1013b81c2\") " pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.626828 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bae7e494-3d8d-4c79-be70-40c1013b81c2-openshift-service-ca\") pod \"perses-operator-5bf474d74f-wjfwj\" (UID: \"bae7e494-3d8d-4c79-be70-40c1013b81c2\") " pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.626879 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt9hb\" (UniqueName: \"kubernetes.io/projected/31fe446c-71c8-4715-988d-513ec60bb444-kube-api-access-wt9hb\") pod \"observability-operator-59bdc8b94-w2q6b\" (UID: \"31fe446c-71c8-4715-988d-513ec60bb444\") " pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.626903 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/31fe446c-71c8-4715-988d-513ec60bb444-observability-operator-tls\") pod \"observability-operator-59bdc8b94-w2q6b\" (UID: \"31fe446c-71c8-4715-988d-513ec60bb444\") " pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.643016 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/31fe446c-71c8-4715-988d-513ec60bb444-observability-operator-tls\") pod \"observability-operator-59bdc8b94-w2q6b\" (UID: \"31fe446c-71c8-4715-988d-513ec60bb444\") " pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.662558 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt9hb\" (UniqueName: \"kubernetes.io/projected/31fe446c-71c8-4715-988d-513ec60bb444-kube-api-access-wt9hb\") pod \"observability-operator-59bdc8b94-w2q6b\" (UID: \"31fe446c-71c8-4715-988d-513ec60bb444\") " pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.694938 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.702730 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg"] Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.728670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntnp5\" (UniqueName: \"kubernetes.io/projected/bae7e494-3d8d-4c79-be70-40c1013b81c2-kube-api-access-ntnp5\") pod \"perses-operator-5bf474d74f-wjfwj\" (UID: \"bae7e494-3d8d-4c79-be70-40c1013b81c2\") " pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.728711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bae7e494-3d8d-4c79-be70-40c1013b81c2-openshift-service-ca\") pod \"perses-operator-5bf474d74f-wjfwj\" (UID: \"bae7e494-3d8d-4c79-be70-40c1013b81c2\") " pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.729665 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bae7e494-3d8d-4c79-be70-40c1013b81c2-openshift-service-ca\") pod \"perses-operator-5bf474d74f-wjfwj\" (UID: \"bae7e494-3d8d-4c79-be70-40c1013b81c2\") " pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.769324 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntnp5\" (UniqueName: \"kubernetes.io/projected/bae7e494-3d8d-4c79-be70-40c1013b81c2-kube-api-access-ntnp5\") pod \"perses-operator-5bf474d74f-wjfwj\" (UID: \"bae7e494-3d8d-4c79-be70-40c1013b81c2\") " pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 
09:11:29.772650 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg" event={"ID":"3dca19a5-2b14-442a-b257-8fdd673d7a23","Type":"ContainerStarted","Data":"90877e0a5eeb87a942643c79d86b33950c52bdd62d86d8bfda42adef68b9e87c"} Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.869806 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:11:29 crc kubenswrapper[4869]: I0314 09:11:29.943005 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9"] Mar 14 09:11:30 crc kubenswrapper[4869]: I0314 09:11:30.024019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8"] Mar 14 09:11:30 crc kubenswrapper[4869]: I0314 09:11:30.167628 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-w2q6b"] Mar 14 09:11:30 crc kubenswrapper[4869]: W0314 09:11:30.175117 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31fe446c_71c8_4715_988d_513ec60bb444.slice/crio-6feaec9cf7b41d290dbc1aa9bfedabf345720d3d0c7b5fe268d1832b935b4414 WatchSource:0}: Error finding container 6feaec9cf7b41d290dbc1aa9bfedabf345720d3d0c7b5fe268d1832b935b4414: Status 404 returned error can't find the container with id 6feaec9cf7b41d290dbc1aa9bfedabf345720d3d0c7b5fe268d1832b935b4414 Mar 14 09:11:30 crc kubenswrapper[4869]: I0314 09:11:30.435863 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-wjfwj"] Mar 14 09:11:30 crc kubenswrapper[4869]: I0314 09:11:30.802659 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" 
event={"ID":"a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda","Type":"ContainerStarted","Data":"919befd8daa210e30408eb39369330d7af4737aba22a14d901efa2e4c54c6a52"} Mar 14 09:11:30 crc kubenswrapper[4869]: I0314 09:11:30.804304 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" event={"ID":"31fe446c-71c8-4715-988d-513ec60bb444","Type":"ContainerStarted","Data":"6feaec9cf7b41d290dbc1aa9bfedabf345720d3d0c7b5fe268d1832b935b4414"} Mar 14 09:11:30 crc kubenswrapper[4869]: I0314 09:11:30.806040 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" event={"ID":"fd7586a9-5944-496e-95a7-c62cacd45de7","Type":"ContainerStarted","Data":"a9cc759d40050a6ae50bdded5e2a6a63d5e18915a4a20f5d9e70410d7bf68175"} Mar 14 09:11:30 crc kubenswrapper[4869]: I0314 09:11:30.807319 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" event={"ID":"bae7e494-3d8d-4c79-be70-40c1013b81c2","Type":"ContainerStarted","Data":"c3d2bfb50b003839b3e4e52110aed3bb9932dca6e0f428827559cecbbcbb65a4"} Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.605575 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.606338 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.606387 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.606967 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f999968c5938eecacf43f0f074516b2654961d6ed5e7331aa8e5f6081cb0111c"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.607017 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://f999968c5938eecacf43f0f074516b2654961d6ed5e7331aa8e5f6081cb0111c" gracePeriod=600 Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.877845 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" event={"ID":"fd7586a9-5944-496e-95a7-c62cacd45de7","Type":"ContainerStarted","Data":"4015cc2e35f8e077f4ca053e0a4fb26abb39564ed447de93e892a1fbb49c4328"} Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.879607 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg" event={"ID":"3dca19a5-2b14-442a-b257-8fdd673d7a23","Type":"ContainerStarted","Data":"d65c9dde9a607e3544f1b1eb860c5fb37737e00693f1cf8558605e7fc681de34"} Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.886626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" event={"ID":"bae7e494-3d8d-4c79-be70-40c1013b81c2","Type":"ContainerStarted","Data":"ce4d722274273a9ad3103483a030baf9ca6f0b59a54c479d49eb5c6092cac885"} Mar 14 
09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.886865 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.890189 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="f999968c5938eecacf43f0f074516b2654961d6ed5e7331aa8e5f6081cb0111c" exitCode=0 Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.890235 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"f999968c5938eecacf43f0f074516b2654961d6ed5e7331aa8e5f6081cb0111c"} Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.890257 4869 scope.go:117] "RemoveContainer" containerID="2fdd9eae6cd4bf6b30da6b9cd0dbf05ee2bb3cb545f0871a79f7910b0bf3b063" Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.893730 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" event={"ID":"a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda","Type":"ContainerStarted","Data":"656b0391a54374d52011bcdc9d22ebfbcef2b0fe19a3beb1876bf80462920d7c"} Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.906720 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-9xsl8" podStartSLOduration=2.141752473 podStartE2EDuration="10.906703615s" podCreationTimestamp="2026-03-14 09:11:29 +0000 UTC" firstStartedPulling="2026-03-14 09:11:30.039737953 +0000 UTC m=+843.012020006" lastFinishedPulling="2026-03-14 09:11:38.804689095 +0000 UTC m=+851.776971148" observedRunningTime="2026-03-14 09:11:39.901802894 +0000 UTC m=+852.874084967" watchObservedRunningTime="2026-03-14 09:11:39.906703615 +0000 UTC m=+852.878985658" Mar 14 09:11:39 crc 
kubenswrapper[4869]: I0314 09:11:39.913230 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" event={"ID":"31fe446c-71c8-4715-988d-513ec60bb444","Type":"ContainerStarted","Data":"85d969fe2df8633eb79d4d6c85f1c7f8059791dbc5d72b24d95edc92ccefed05"} Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.914012 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.938808 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-95768cd78-pfxf9" podStartSLOduration=2.055326732 podStartE2EDuration="10.938787444s" podCreationTimestamp="2026-03-14 09:11:29 +0000 UTC" firstStartedPulling="2026-03-14 09:11:29.953305791 +0000 UTC m=+842.925587844" lastFinishedPulling="2026-03-14 09:11:38.836766503 +0000 UTC m=+851.809048556" observedRunningTime="2026-03-14 09:11:39.934099908 +0000 UTC m=+852.906381971" watchObservedRunningTime="2026-03-14 09:11:39.938787444 +0000 UTC m=+852.911069497" Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.980460 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zc9kg" podStartSLOduration=1.885861169 podStartE2EDuration="10.980442338s" podCreationTimestamp="2026-03-14 09:11:29 +0000 UTC" firstStartedPulling="2026-03-14 09:11:29.716674757 +0000 UTC m=+842.688956820" lastFinishedPulling="2026-03-14 09:11:38.811255936 +0000 UTC m=+851.783537989" observedRunningTime="2026-03-14 09:11:39.962735252 +0000 UTC m=+852.935017295" watchObservedRunningTime="2026-03-14 09:11:39.980442338 +0000 UTC m=+852.952724401" Mar 14 09:11:39 crc kubenswrapper[4869]: I0314 09:11:39.995575 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" Mar 14 09:11:40 crc kubenswrapper[4869]: I0314 09:11:40.004640 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" podStartSLOduration=2.6288697389999998 podStartE2EDuration="11.004620902s" podCreationTimestamp="2026-03-14 09:11:29 +0000 UTC" firstStartedPulling="2026-03-14 09:11:30.449678852 +0000 UTC m=+843.421960905" lastFinishedPulling="2026-03-14 09:11:38.825430015 +0000 UTC m=+851.797712068" observedRunningTime="2026-03-14 09:11:39.980869648 +0000 UTC m=+852.953151711" watchObservedRunningTime="2026-03-14 09:11:40.004620902 +0000 UTC m=+852.976902955" Mar 14 09:11:40 crc kubenswrapper[4869]: I0314 09:11:40.004767 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-w2q6b" podStartSLOduration=2.277515219 podStartE2EDuration="11.004761305s" podCreationTimestamp="2026-03-14 09:11:29 +0000 UTC" firstStartedPulling="2026-03-14 09:11:30.178249686 +0000 UTC m=+843.150531729" lastFinishedPulling="2026-03-14 09:11:38.905495762 +0000 UTC m=+851.877777815" observedRunningTime="2026-03-14 09:11:40.002876279 +0000 UTC m=+852.975158322" watchObservedRunningTime="2026-03-14 09:11:40.004761305 +0000 UTC m=+852.977043378" Mar 14 09:11:40 crc kubenswrapper[4869]: I0314 09:11:40.920790 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"6659e883f1bb6e9d0a6c6412fd0c4a00d22fe987bb78cc13d8b2e976a19f9ff0"} Mar 14 09:11:49 crc kubenswrapper[4869]: I0314 09:11:49.873183 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-wjfwj" Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.138009 4869 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29557992-qsslm"] Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.139468 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557992-qsslm" Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.142039 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.142093 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.146230 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.151161 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557992-qsslm"] Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.336207 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcf9p\" (UniqueName: \"kubernetes.io/projected/810d831b-f3a6-498d-b1d1-33dc89ef275c-kube-api-access-kcf9p\") pod \"auto-csr-approver-29557992-qsslm\" (UID: \"810d831b-f3a6-498d-b1d1-33dc89ef275c\") " pod="openshift-infra/auto-csr-approver-29557992-qsslm" Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.437360 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcf9p\" (UniqueName: \"kubernetes.io/projected/810d831b-f3a6-498d-b1d1-33dc89ef275c-kube-api-access-kcf9p\") pod \"auto-csr-approver-29557992-qsslm\" (UID: \"810d831b-f3a6-498d-b1d1-33dc89ef275c\") " pod="openshift-infra/auto-csr-approver-29557992-qsslm" Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.465273 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcf9p\" (UniqueName: 
\"kubernetes.io/projected/810d831b-f3a6-498d-b1d1-33dc89ef275c-kube-api-access-kcf9p\") pod \"auto-csr-approver-29557992-qsslm\" (UID: \"810d831b-f3a6-498d-b1d1-33dc89ef275c\") " pod="openshift-infra/auto-csr-approver-29557992-qsslm" Mar 14 09:12:00 crc kubenswrapper[4869]: I0314 09:12:00.759155 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557992-qsslm" Mar 14 09:12:01 crc kubenswrapper[4869]: I0314 09:12:01.250545 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557992-qsslm"] Mar 14 09:12:01 crc kubenswrapper[4869]: W0314 09:12:01.266938 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod810d831b_f3a6_498d_b1d1_33dc89ef275c.slice/crio-d695fd367036d7c669f8ec925db53b0066568b273c460a6a41c13abf1c6ddfef WatchSource:0}: Error finding container d695fd367036d7c669f8ec925db53b0066568b273c460a6a41c13abf1c6ddfef: Status 404 returned error can't find the container with id d695fd367036d7c669f8ec925db53b0066568b273c460a6a41c13abf1c6ddfef Mar 14 09:12:02 crc kubenswrapper[4869]: I0314 09:12:02.051718 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557992-qsslm" event={"ID":"810d831b-f3a6-498d-b1d1-33dc89ef275c","Type":"ContainerStarted","Data":"d695fd367036d7c669f8ec925db53b0066568b273c460a6a41c13abf1c6ddfef"} Mar 14 09:12:03 crc kubenswrapper[4869]: I0314 09:12:03.062974 4869 generic.go:334] "Generic (PLEG): container finished" podID="810d831b-f3a6-498d-b1d1-33dc89ef275c" containerID="ee6be54ba3b92dee996fbb9033e564a57c100984441378bcf828afb84a47cf5f" exitCode=0 Mar 14 09:12:03 crc kubenswrapper[4869]: I0314 09:12:03.063099 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557992-qsslm" 
event={"ID":"810d831b-f3a6-498d-b1d1-33dc89ef275c","Type":"ContainerDied","Data":"ee6be54ba3b92dee996fbb9033e564a57c100984441378bcf828afb84a47cf5f"} Mar 14 09:12:04 crc kubenswrapper[4869]: I0314 09:12:04.387989 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557992-qsslm" Mar 14 09:12:04 crc kubenswrapper[4869]: I0314 09:12:04.492641 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcf9p\" (UniqueName: \"kubernetes.io/projected/810d831b-f3a6-498d-b1d1-33dc89ef275c-kube-api-access-kcf9p\") pod \"810d831b-f3a6-498d-b1d1-33dc89ef275c\" (UID: \"810d831b-f3a6-498d-b1d1-33dc89ef275c\") " Mar 14 09:12:04 crc kubenswrapper[4869]: I0314 09:12:04.499460 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810d831b-f3a6-498d-b1d1-33dc89ef275c-kube-api-access-kcf9p" (OuterVolumeSpecName: "kube-api-access-kcf9p") pod "810d831b-f3a6-498d-b1d1-33dc89ef275c" (UID: "810d831b-f3a6-498d-b1d1-33dc89ef275c"). InnerVolumeSpecName "kube-api-access-kcf9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:12:04 crc kubenswrapper[4869]: I0314 09:12:04.594270 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcf9p\" (UniqueName: \"kubernetes.io/projected/810d831b-f3a6-498d-b1d1-33dc89ef275c-kube-api-access-kcf9p\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.111685 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557992-qsslm" event={"ID":"810d831b-f3a6-498d-b1d1-33dc89ef275c","Type":"ContainerDied","Data":"d695fd367036d7c669f8ec925db53b0066568b273c460a6a41c13abf1c6ddfef"} Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.111736 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d695fd367036d7c669f8ec925db53b0066568b273c460a6a41c13abf1c6ddfef" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.111755 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557992-qsslm" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.448448 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557986-s59q5"] Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.454274 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557986-s59q5"] Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.710651 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2171b9a-7258-4bad-97b8-37d0f4a599b2" path="/var/lib/kubelet/pods/c2171b9a-7258-4bad-97b8-37d0f4a599b2/volumes" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.851642 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7"] Mar 14 09:12:05 crc kubenswrapper[4869]: E0314 09:12:05.851931 4869 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="810d831b-f3a6-498d-b1d1-33dc89ef275c" containerName="oc" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.851950 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="810d831b-f3a6-498d-b1d1-33dc89ef275c" containerName="oc" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.852063 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="810d831b-f3a6-498d-b1d1-33dc89ef275c" containerName="oc" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.853048 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.854835 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.867899 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7"] Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.913851 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.913901 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:05 crc kubenswrapper[4869]: I0314 09:12:05.913949 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kpx\" (UniqueName: \"kubernetes.io/projected/3cf8965f-4dc4-402b-91ab-415c90cde24e-kube-api-access-55kpx\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:06 crc kubenswrapper[4869]: I0314 09:12:06.015195 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55kpx\" (UniqueName: \"kubernetes.io/projected/3cf8965f-4dc4-402b-91ab-415c90cde24e-kube-api-access-55kpx\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:06 crc kubenswrapper[4869]: I0314 09:12:06.015254 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:06 crc kubenswrapper[4869]: I0314 09:12:06.015285 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 
09:12:06 crc kubenswrapper[4869]: I0314 09:12:06.015718 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:06 crc kubenswrapper[4869]: I0314 09:12:06.015809 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:06 crc kubenswrapper[4869]: I0314 09:12:06.047913 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55kpx\" (UniqueName: \"kubernetes.io/projected/3cf8965f-4dc4-402b-91ab-415c90cde24e-kube-api-access-55kpx\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:06 crc kubenswrapper[4869]: I0314 09:12:06.166763 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:06 crc kubenswrapper[4869]: I0314 09:12:06.615714 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7"] Mar 14 09:12:07 crc kubenswrapper[4869]: I0314 09:12:07.123674 4869 generic.go:334] "Generic (PLEG): container finished" podID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerID="a48ae396c88a009ddb8a9927713116d3c4e51c48e9268739045b6ca5857fe032" exitCode=0 Mar 14 09:12:07 crc kubenswrapper[4869]: I0314 09:12:07.123822 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" event={"ID":"3cf8965f-4dc4-402b-91ab-415c90cde24e","Type":"ContainerDied","Data":"a48ae396c88a009ddb8a9927713116d3c4e51c48e9268739045b6ca5857fe032"} Mar 14 09:12:07 crc kubenswrapper[4869]: I0314 09:12:07.124024 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" event={"ID":"3cf8965f-4dc4-402b-91ab-415c90cde24e","Type":"ContainerStarted","Data":"19f047be561b86ff7b33226634dd7c2cbe868e09434439ae42c033c4ed77b358"} Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.225171 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7bdhf"] Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.226475 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.244752 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-utilities\") pod \"redhat-operators-7bdhf\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.245159 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnw9d\" (UniqueName: \"kubernetes.io/projected/56398221-45fd-44e2-a725-ba2dae4f476f-kube-api-access-jnw9d\") pod \"redhat-operators-7bdhf\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.245195 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-catalog-content\") pod \"redhat-operators-7bdhf\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.262181 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bdhf"] Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.345992 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-utilities\") pod \"redhat-operators-7bdhf\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.346072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jnw9d\" (UniqueName: \"kubernetes.io/projected/56398221-45fd-44e2-a725-ba2dae4f476f-kube-api-access-jnw9d\") pod \"redhat-operators-7bdhf\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.346101 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-catalog-content\") pod \"redhat-operators-7bdhf\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.346587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-catalog-content\") pod \"redhat-operators-7bdhf\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.346662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-utilities\") pod \"redhat-operators-7bdhf\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.371114 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnw9d\" (UniqueName: \"kubernetes.io/projected/56398221-45fd-44e2-a725-ba2dae4f476f-kube-api-access-jnw9d\") pod \"redhat-operators-7bdhf\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.564260 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:08 crc kubenswrapper[4869]: I0314 09:12:08.801386 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bdhf"] Mar 14 09:12:08 crc kubenswrapper[4869]: W0314 09:12:08.814122 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56398221_45fd_44e2_a725_ba2dae4f476f.slice/crio-57f6ab5c08888833bb04ccbd86e72aad303d5f65e03841deb046246c96155f87 WatchSource:0}: Error finding container 57f6ab5c08888833bb04ccbd86e72aad303d5f65e03841deb046246c96155f87: Status 404 returned error can't find the container with id 57f6ab5c08888833bb04ccbd86e72aad303d5f65e03841deb046246c96155f87 Mar 14 09:12:09 crc kubenswrapper[4869]: I0314 09:12:09.146965 4869 generic.go:334] "Generic (PLEG): container finished" podID="56398221-45fd-44e2-a725-ba2dae4f476f" containerID="1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235" exitCode=0 Mar 14 09:12:09 crc kubenswrapper[4869]: I0314 09:12:09.147112 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bdhf" event={"ID":"56398221-45fd-44e2-a725-ba2dae4f476f","Type":"ContainerDied","Data":"1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235"} Mar 14 09:12:09 crc kubenswrapper[4869]: I0314 09:12:09.147292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bdhf" event={"ID":"56398221-45fd-44e2-a725-ba2dae4f476f","Type":"ContainerStarted","Data":"57f6ab5c08888833bb04ccbd86e72aad303d5f65e03841deb046246c96155f87"} Mar 14 09:12:09 crc kubenswrapper[4869]: I0314 09:12:09.149205 4869 generic.go:334] "Generic (PLEG): container finished" podID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerID="ec2dc4ec5df895439f770881653fd5b892eae2cff6a164323b99a0bbdf70c0a3" exitCode=0 Mar 14 09:12:09 crc kubenswrapper[4869]: I0314 09:12:09.149232 
4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" event={"ID":"3cf8965f-4dc4-402b-91ab-415c90cde24e","Type":"ContainerDied","Data":"ec2dc4ec5df895439f770881653fd5b892eae2cff6a164323b99a0bbdf70c0a3"} Mar 14 09:12:10 crc kubenswrapper[4869]: I0314 09:12:10.156069 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bdhf" event={"ID":"56398221-45fd-44e2-a725-ba2dae4f476f","Type":"ContainerStarted","Data":"77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083"} Mar 14 09:12:10 crc kubenswrapper[4869]: I0314 09:12:10.158924 4869 generic.go:334] "Generic (PLEG): container finished" podID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerID="9e686c1e4f0994baf86c7b7033b111f215e9619f88f176fcfa9943cc8f682793" exitCode=0 Mar 14 09:12:10 crc kubenswrapper[4869]: I0314 09:12:10.158975 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" event={"ID":"3cf8965f-4dc4-402b-91ab-415c90cde24e","Type":"ContainerDied","Data":"9e686c1e4f0994baf86c7b7033b111f215e9619f88f176fcfa9943cc8f682793"} Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.172185 4869 generic.go:334] "Generic (PLEG): container finished" podID="56398221-45fd-44e2-a725-ba2dae4f476f" containerID="77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083" exitCode=0 Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.172250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bdhf" event={"ID":"56398221-45fd-44e2-a725-ba2dae4f476f","Type":"ContainerDied","Data":"77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083"} Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.461249 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.620724 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-util\") pod \"3cf8965f-4dc4-402b-91ab-415c90cde24e\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.620770 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55kpx\" (UniqueName: \"kubernetes.io/projected/3cf8965f-4dc4-402b-91ab-415c90cde24e-kube-api-access-55kpx\") pod \"3cf8965f-4dc4-402b-91ab-415c90cde24e\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.620814 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-bundle\") pod \"3cf8965f-4dc4-402b-91ab-415c90cde24e\" (UID: \"3cf8965f-4dc4-402b-91ab-415c90cde24e\") " Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.624858 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-bundle" (OuterVolumeSpecName: "bundle") pod "3cf8965f-4dc4-402b-91ab-415c90cde24e" (UID: "3cf8965f-4dc4-402b-91ab-415c90cde24e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.629691 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf8965f-4dc4-402b-91ab-415c90cde24e-kube-api-access-55kpx" (OuterVolumeSpecName: "kube-api-access-55kpx") pod "3cf8965f-4dc4-402b-91ab-415c90cde24e" (UID: "3cf8965f-4dc4-402b-91ab-415c90cde24e"). InnerVolumeSpecName "kube-api-access-55kpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.644377 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-util" (OuterVolumeSpecName: "util") pod "3cf8965f-4dc4-402b-91ab-415c90cde24e" (UID: "3cf8965f-4dc4-402b-91ab-415c90cde24e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.721736 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.721777 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cf8965f-4dc4-402b-91ab-415c90cde24e-util\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:11 crc kubenswrapper[4869]: I0314 09:12:11.721800 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55kpx\" (UniqueName: \"kubernetes.io/projected/3cf8965f-4dc4-402b-91ab-415c90cde24e-kube-api-access-55kpx\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:12 crc kubenswrapper[4869]: I0314 09:12:12.182264 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" event={"ID":"3cf8965f-4dc4-402b-91ab-415c90cde24e","Type":"ContainerDied","Data":"19f047be561b86ff7b33226634dd7c2cbe868e09434439ae42c033c4ed77b358"} Mar 14 09:12:12 crc kubenswrapper[4869]: I0314 09:12:12.182305 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19f047be561b86ff7b33226634dd7c2cbe868e09434439ae42c033c4ed77b358" Mar 14 09:12:12 crc kubenswrapper[4869]: I0314 09:12:12.182358 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7" Mar 14 09:12:12 crc kubenswrapper[4869]: I0314 09:12:12.187839 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bdhf" event={"ID":"56398221-45fd-44e2-a725-ba2dae4f476f","Type":"ContainerStarted","Data":"ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0"} Mar 14 09:12:12 crc kubenswrapper[4869]: I0314 09:12:12.209737 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7bdhf" podStartSLOduration=1.460414232 podStartE2EDuration="4.209717913s" podCreationTimestamp="2026-03-14 09:12:08 +0000 UTC" firstStartedPulling="2026-03-14 09:12:09.149244486 +0000 UTC m=+882.121526549" lastFinishedPulling="2026-03-14 09:12:11.898548147 +0000 UTC m=+884.870830230" observedRunningTime="2026-03-14 09:12:12.207609042 +0000 UTC m=+885.179891105" watchObservedRunningTime="2026-03-14 09:12:12.209717913 +0000 UTC m=+885.181999976" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.021364 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-n59gq"] Mar 14 09:12:15 crc kubenswrapper[4869]: E0314 09:12:15.021966 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerName="util" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.021982 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerName="util" Mar 14 09:12:15 crc kubenswrapper[4869]: E0314 09:12:15.021996 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerName="pull" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.022004 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerName="pull" Mar 14 
09:12:15 crc kubenswrapper[4869]: E0314 09:12:15.022030 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerName="extract" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.022038 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerName="extract" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.022158 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf8965f-4dc4-402b-91ab-415c90cde24e" containerName="extract" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.022684 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-n59gq" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.028546 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.028982 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.029555 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-46cmz" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.036237 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-n59gq"] Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.163681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld9tl\" (UniqueName: \"kubernetes.io/projected/a4c258ce-f170-4d41-81c7-8baff94d2db9-kube-api-access-ld9tl\") pod \"nmstate-operator-796d4cfff4-n59gq\" (UID: \"a4c258ce-f170-4d41-81c7-8baff94d2db9\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-n59gq" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.264483 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld9tl\" (UniqueName: \"kubernetes.io/projected/a4c258ce-f170-4d41-81c7-8baff94d2db9-kube-api-access-ld9tl\") pod \"nmstate-operator-796d4cfff4-n59gq\" (UID: \"a4c258ce-f170-4d41-81c7-8baff94d2db9\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-n59gq" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.291823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld9tl\" (UniqueName: \"kubernetes.io/projected/a4c258ce-f170-4d41-81c7-8baff94d2db9-kube-api-access-ld9tl\") pod \"nmstate-operator-796d4cfff4-n59gq\" (UID: \"a4c258ce-f170-4d41-81c7-8baff94d2db9\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-n59gq" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.389008 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-n59gq" Mar 14 09:12:15 crc kubenswrapper[4869]: I0314 09:12:15.595659 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-n59gq"] Mar 14 09:12:16 crc kubenswrapper[4869]: I0314 09:12:16.210918 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-n59gq" event={"ID":"a4c258ce-f170-4d41-81c7-8baff94d2db9","Type":"ContainerStarted","Data":"348001e6ed339c5fd7d582d68475eb9e2b116b8084d18879a4c6c05755a9980d"} Mar 14 09:12:18 crc kubenswrapper[4869]: I0314 09:12:18.565002 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:18 crc kubenswrapper[4869]: I0314 09:12:18.565503 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:19 crc kubenswrapper[4869]: I0314 09:12:19.232591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-operator-796d4cfff4-n59gq" event={"ID":"a4c258ce-f170-4d41-81c7-8baff94d2db9","Type":"ContainerStarted","Data":"a192b7cdf7ad8d4cd27e39478057f2d05f87b0541ddffd64f1a96acc5b059c26"} Mar 14 09:12:19 crc kubenswrapper[4869]: I0314 09:12:19.257786 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-n59gq" podStartSLOduration=2.770339135 podStartE2EDuration="5.257769411s" podCreationTimestamp="2026-03-14 09:12:14 +0000 UTC" firstStartedPulling="2026-03-14 09:12:15.616652394 +0000 UTC m=+888.588934447" lastFinishedPulling="2026-03-14 09:12:18.10408267 +0000 UTC m=+891.076364723" observedRunningTime="2026-03-14 09:12:19.255018823 +0000 UTC m=+892.227300876" watchObservedRunningTime="2026-03-14 09:12:19.257769411 +0000 UTC m=+892.230051464" Mar 14 09:12:19 crc kubenswrapper[4869]: I0314 09:12:19.616768 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7bdhf" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" containerName="registry-server" probeResult="failure" output=< Mar 14 09:12:19 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 09:12:19 crc kubenswrapper[4869]: > Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.006589 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.009646 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.010912 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-244rz"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.011803 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.012997 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xj7s5" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.013139 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.021971 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.027219 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-244rz"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.052907 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-965cd"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.053753 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.106581 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6ab5eba-10b6-4553-a185-c9fee70073c0-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-244rz\" (UID: \"d6ab5eba-10b6-4553-a185-c9fee70073c0\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.106633 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfmv\" (UniqueName: \"kubernetes.io/projected/7def1104-dc9a-43ed-9c74-744352ed80cb-kube-api-access-zcfmv\") pod \"nmstate-metrics-9b8c8685d-pjm8s\" (UID: \"7def1104-dc9a-43ed-9c74-744352ed80cb\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.106689 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lghlz\" (UniqueName: \"kubernetes.io/projected/d6ab5eba-10b6-4553-a185-c9fee70073c0-kube-api-access-lghlz\") pod \"nmstate-webhook-5f558f5558-244rz\" (UID: \"d6ab5eba-10b6-4553-a185-c9fee70073c0\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.131870 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.132695 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.135420 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.135749 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wg4ql" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.136805 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.146448 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.208124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-nmstate-lock\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.208180 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbs74\" (UniqueName: \"kubernetes.io/projected/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-kube-api-access-rbs74\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.208208 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lghlz\" (UniqueName: \"kubernetes.io/projected/d6ab5eba-10b6-4553-a185-c9fee70073c0-kube-api-access-lghlz\") pod \"nmstate-webhook-5f558f5558-244rz\" (UID: \"d6ab5eba-10b6-4553-a185-c9fee70073c0\") " 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.208256 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-ovs-socket\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.208277 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-dbus-socket\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.208293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6ab5eba-10b6-4553-a185-c9fee70073c0-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-244rz\" (UID: \"d6ab5eba-10b6-4553-a185-c9fee70073c0\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.208313 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcfmv\" (UniqueName: \"kubernetes.io/projected/7def1104-dc9a-43ed-9c74-744352ed80cb-kube-api-access-zcfmv\") pod \"nmstate-metrics-9b8c8685d-pjm8s\" (UID: \"7def1104-dc9a-43ed-9c74-744352ed80cb\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.213069 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6ab5eba-10b6-4553-a185-c9fee70073c0-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-244rz\" (UID: \"d6ab5eba-10b6-4553-a185-c9fee70073c0\") " 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.222482 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcfmv\" (UniqueName: \"kubernetes.io/projected/7def1104-dc9a-43ed-9c74-744352ed80cb-kube-api-access-zcfmv\") pod \"nmstate-metrics-9b8c8685d-pjm8s\" (UID: \"7def1104-dc9a-43ed-9c74-744352ed80cb\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.223911 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lghlz\" (UniqueName: \"kubernetes.io/projected/d6ab5eba-10b6-4553-a185-c9fee70073c0-kube-api-access-lghlz\") pod \"nmstate-webhook-5f558f5558-244rz\" (UID: \"d6ab5eba-10b6-4553-a185-c9fee70073c0\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.309228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-nmstate-lock\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.309595 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65kh\" (UniqueName: \"kubernetes.io/projected/69005533-e9a5-4d50-912f-70adb7debd05-kube-api-access-q65kh\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.309744 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbs74\" (UniqueName: \"kubernetes.io/projected/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-kube-api-access-rbs74\") pod 
\"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.309934 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/69005533-e9a5-4d50-912f-70adb7debd05-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.310044 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/69005533-e9a5-4d50-912f-70adb7debd05-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.310154 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-ovs-socket\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.310230 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-dbus-socket\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.310609 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-dbus-socket\") pod 
\"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.309307 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-579b8cb5c4-fhh55"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.311853 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.309382 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-nmstate-lock\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.312106 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-ovs-socket\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.326938 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-579b8cb5c4-fhh55"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.331025 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.334399 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbs74\" (UniqueName: \"kubernetes.io/projected/a7464d00-e0bb-4ff7-9d53-023ea540cf6b-kube-api-access-rbs74\") pod \"nmstate-handler-965cd\" (UID: \"a7464d00-e0bb-4ff7-9d53-023ea540cf6b\") " pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.398792 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.412699 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-trusted-ca-bundle\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.412747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q65kh\" (UniqueName: \"kubernetes.io/projected/69005533-e9a5-4d50-912f-70adb7debd05-kube-api-access-q65kh\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.412786 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-service-ca\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.412812 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-console-config\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.412834 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-oauth-serving-cert\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.412871 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-console-serving-cert\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.412901 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/69005533-e9a5-4d50-912f-70adb7debd05-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.412921 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/69005533-e9a5-4d50-912f-70adb7debd05-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " 
pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.412971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-console-oauth-config\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.413012 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxfl5\" (UniqueName: \"kubernetes.io/projected/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-kube-api-access-sxfl5\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: E0314 09:12:26.413410 4869 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 14 09:12:26 crc kubenswrapper[4869]: E0314 09:12:26.413456 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69005533-e9a5-4d50-912f-70adb7debd05-plugin-serving-cert podName:69005533-e9a5-4d50-912f-70adb7debd05 nodeName:}" failed. No retries permitted until 2026-03-14 09:12:26.913438874 +0000 UTC m=+899.885720927 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/69005533-e9a5-4d50-912f-70adb7debd05-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-xrxbl" (UID: "69005533-e9a5-4d50-912f-70adb7debd05") : secret "plugin-serving-cert" not found Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.414486 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/69005533-e9a5-4d50-912f-70adb7debd05-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.414647 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.430192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q65kh\" (UniqueName: \"kubernetes.io/projected/69005533-e9a5-4d50-912f-70adb7debd05-kube-api-access-q65kh\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.515837 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxfl5\" (UniqueName: \"kubernetes.io/projected/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-kube-api-access-sxfl5\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.516225 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-trusted-ca-bundle\") pod \"console-579b8cb5c4-fhh55\" 
(UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.516258 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-console-config\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.516275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-service-ca\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.516291 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-oauth-serving-cert\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.516327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-console-serving-cert\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.516390 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-console-oauth-config\") pod \"console-579b8cb5c4-fhh55\" (UID: 
\"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.517430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-console-config\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.518422 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-trusted-ca-bundle\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.518969 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-oauth-serving-cert\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.519450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-service-ca\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.520760 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-console-oauth-config\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " 
pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.525746 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-console-serving-cert\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.537604 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxfl5\" (UniqueName: \"kubernetes.io/projected/79cfb7d8-9fcb-4fb0-8635-d5303f1e3998-kube-api-access-sxfl5\") pod \"console-579b8cb5c4-fhh55\" (UID: \"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998\") " pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.630813 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.650221 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.706893 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-244rz"] Mar 14 09:12:26 crc kubenswrapper[4869]: W0314 09:12:26.715100 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ab5eba_10b6_4553_a185_c9fee70073c0.slice/crio-3ee6a0aa032bf802155ccac13189bfaf3a325066d3c7904a8be6b091d5702374 WatchSource:0}: Error finding container 3ee6a0aa032bf802155ccac13189bfaf3a325066d3c7904a8be6b091d5702374: Status 404 returned error can't find the container with id 3ee6a0aa032bf802155ccac13189bfaf3a325066d3c7904a8be6b091d5702374 Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.829214 4869 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-console/console-579b8cb5c4-fhh55"] Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.921205 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/69005533-e9a5-4d50-912f-70adb7debd05-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:26 crc kubenswrapper[4869]: I0314 09:12:26.925055 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/69005533-e9a5-4d50-912f-70adb7debd05-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-xrxbl\" (UID: \"69005533-e9a5-4d50-912f-70adb7debd05\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:27 crc kubenswrapper[4869]: I0314 09:12:27.048886 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" Mar 14 09:12:27 crc kubenswrapper[4869]: I0314 09:12:27.305188 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s" event={"ID":"7def1104-dc9a-43ed-9c74-744352ed80cb","Type":"ContainerStarted","Data":"20ae396722d21ebf533d2e4e395bb2359dba7fe83b00591ce3435574d342ebb4"} Mar 14 09:12:27 crc kubenswrapper[4869]: I0314 09:12:27.322018 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-965cd" event={"ID":"a7464d00-e0bb-4ff7-9d53-023ea540cf6b","Type":"ContainerStarted","Data":"6e29a4b3f0803d175a85f7db0b43e112a47e51a3468bd2c764534b187fb1b462"} Mar 14 09:12:27 crc kubenswrapper[4869]: I0314 09:12:27.337192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-579b8cb5c4-fhh55" event={"ID":"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998","Type":"ContainerStarted","Data":"345556a2ed86b94fcf53a9701be527df48146864b4f66ebe04945a8e7d77fe6e"} Mar 14 09:12:27 crc kubenswrapper[4869]: I0314 09:12:27.337253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-579b8cb5c4-fhh55" event={"ID":"79cfb7d8-9fcb-4fb0-8635-d5303f1e3998","Type":"ContainerStarted","Data":"a98ad829c4d9705e614c2f9eff33da29b7c9d147d405a7a454bf031fe879129a"} Mar 14 09:12:27 crc kubenswrapper[4869]: I0314 09:12:27.340792 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl"] Mar 14 09:12:27 crc kubenswrapper[4869]: I0314 09:12:27.348826 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" event={"ID":"d6ab5eba-10b6-4553-a185-c9fee70073c0","Type":"ContainerStarted","Data":"3ee6a0aa032bf802155ccac13189bfaf3a325066d3c7904a8be6b091d5702374"} Mar 14 09:12:27 crc kubenswrapper[4869]: I0314 09:12:27.386919 4869 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/console-579b8cb5c4-fhh55" podStartSLOduration=1.386902946 podStartE2EDuration="1.386902946s" podCreationTimestamp="2026-03-14 09:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:12:27.38217561 +0000 UTC m=+900.354457663" watchObservedRunningTime="2026-03-14 09:12:27.386902946 +0000 UTC m=+900.359184999" Mar 14 09:12:28 crc kubenswrapper[4869]: I0314 09:12:28.359352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" event={"ID":"69005533-e9a5-4d50-912f-70adb7debd05","Type":"ContainerStarted","Data":"5361d8b3e1ff1668902fc28de8076ce8ddbe7e6130fb369c8b89dc6c4e7d315e"} Mar 14 09:12:28 crc kubenswrapper[4869]: I0314 09:12:28.605700 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:28 crc kubenswrapper[4869]: I0314 09:12:28.652493 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:28 crc kubenswrapper[4869]: I0314 09:12:28.837920 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bdhf"] Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.401900 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" event={"ID":"d6ab5eba-10b6-4553-a185-c9fee70073c0","Type":"ContainerStarted","Data":"eb65e3a6728b2b6cf62aa37c53c7f8175b253be86a7efc99a0ee812781f8bc0f"} Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.402548 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.405091 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s" event={"ID":"7def1104-dc9a-43ed-9c74-744352ed80cb","Type":"ContainerStarted","Data":"c630f6bad5708d6b05d562800738a937ea2c9f30f6064a9405ed4dd15284e3a9"} Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.407367 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-965cd" event={"ID":"a7464d00-e0bb-4ff7-9d53-023ea540cf6b","Type":"ContainerStarted","Data":"c4908227a6be68d5a4c3fdf67ab3ad38bf6040bb55c08f795bd047d6382f5ab0"} Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.407435 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7bdhf" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" containerName="registry-server" containerID="cri-o://ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0" gracePeriod=2 Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.427519 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" podStartSLOduration=2.988351975 podStartE2EDuration="5.427478814s" podCreationTimestamp="2026-03-14 09:12:25 +0000 UTC" firstStartedPulling="2026-03-14 09:12:26.71767594 +0000 UTC m=+899.689957993" lastFinishedPulling="2026-03-14 09:12:29.156802759 +0000 UTC m=+902.129084832" observedRunningTime="2026-03-14 09:12:30.418932833 +0000 UTC m=+903.391214896" watchObservedRunningTime="2026-03-14 09:12:30.427478814 +0000 UTC m=+903.399760867" Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.443475 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-965cd" podStartSLOduration=1.656796277 podStartE2EDuration="4.443452556s" podCreationTimestamp="2026-03-14 09:12:26 +0000 UTC" firstStartedPulling="2026-03-14 09:12:26.444914827 +0000 UTC m=+899.417196880" lastFinishedPulling="2026-03-14 09:12:29.231571086 +0000 UTC m=+902.203853159" 
observedRunningTime="2026-03-14 09:12:30.437966402 +0000 UTC m=+903.410248475" watchObservedRunningTime="2026-03-14 09:12:30.443452556 +0000 UTC m=+903.415734619" Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.822382 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.912814 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-catalog-content\") pod \"56398221-45fd-44e2-a725-ba2dae4f476f\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.913049 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnw9d\" (UniqueName: \"kubernetes.io/projected/56398221-45fd-44e2-a725-ba2dae4f476f-kube-api-access-jnw9d\") pod \"56398221-45fd-44e2-a725-ba2dae4f476f\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.913109 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-utilities\") pod \"56398221-45fd-44e2-a725-ba2dae4f476f\" (UID: \"56398221-45fd-44e2-a725-ba2dae4f476f\") " Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.914277 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-utilities" (OuterVolumeSpecName: "utilities") pod "56398221-45fd-44e2-a725-ba2dae4f476f" (UID: "56398221-45fd-44e2-a725-ba2dae4f476f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:12:30 crc kubenswrapper[4869]: I0314 09:12:30.928870 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56398221-45fd-44e2-a725-ba2dae4f476f-kube-api-access-jnw9d" (OuterVolumeSpecName: "kube-api-access-jnw9d") pod "56398221-45fd-44e2-a725-ba2dae4f476f" (UID: "56398221-45fd-44e2-a725-ba2dae4f476f"). InnerVolumeSpecName "kube-api-access-jnw9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.015001 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnw9d\" (UniqueName: \"kubernetes.io/projected/56398221-45fd-44e2-a725-ba2dae4f476f-kube-api-access-jnw9d\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.015047 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.048774 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56398221-45fd-44e2-a725-ba2dae4f476f" (UID: "56398221-45fd-44e2-a725-ba2dae4f476f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.116839 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56398221-45fd-44e2-a725-ba2dae4f476f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.415000 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.417453 4869 generic.go:334] "Generic (PLEG): container finished" podID="56398221-45fd-44e2-a725-ba2dae4f476f" containerID="ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0" exitCode=0 Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.417536 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bdhf" event={"ID":"56398221-45fd-44e2-a725-ba2dae4f476f","Type":"ContainerDied","Data":"ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0"} Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.417574 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bdhf" event={"ID":"56398221-45fd-44e2-a725-ba2dae4f476f","Type":"ContainerDied","Data":"57f6ab5c08888833bb04ccbd86e72aad303d5f65e03841deb046246c96155f87"} Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.417604 4869 scope.go:117] "RemoveContainer" containerID="ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0" Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.417821 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bdhf" Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.428288 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" event={"ID":"69005533-e9a5-4d50-912f-70adb7debd05","Type":"ContainerStarted","Data":"95ab78ee6e8c45b68e30228c7810ff5bf814be0d632e03b7b85b99c2c6a6294f"} Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.457443 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xrxbl" podStartSLOduration=2.371971593 podStartE2EDuration="5.457390773s" podCreationTimestamp="2026-03-14 09:12:26 +0000 UTC" firstStartedPulling="2026-03-14 09:12:27.390863473 +0000 UTC m=+900.363145526" lastFinishedPulling="2026-03-14 09:12:30.476282643 +0000 UTC m=+903.448564706" observedRunningTime="2026-03-14 09:12:31.449188491 +0000 UTC m=+904.421470564" watchObservedRunningTime="2026-03-14 09:12:31.457390773 +0000 UTC m=+904.429672826" Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.476939 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bdhf"] Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.482779 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7bdhf"] Mar 14 09:12:31 crc kubenswrapper[4869]: I0314 09:12:31.719545 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" path="/var/lib/kubelet/pods/56398221-45fd-44e2-a725-ba2dae4f476f/volumes" Mar 14 09:12:32 crc kubenswrapper[4869]: I0314 09:12:32.161669 4869 scope.go:117] "RemoveContainer" containerID="77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083" Mar 14 09:12:32 crc kubenswrapper[4869]: I0314 09:12:32.207366 4869 scope.go:117] "RemoveContainer" containerID="1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235" 
Mar 14 09:12:32 crc kubenswrapper[4869]: I0314 09:12:32.232001 4869 scope.go:117] "RemoveContainer" containerID="ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0" Mar 14 09:12:32 crc kubenswrapper[4869]: E0314 09:12:32.232944 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0\": container with ID starting with ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0 not found: ID does not exist" containerID="ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0" Mar 14 09:12:32 crc kubenswrapper[4869]: I0314 09:12:32.232979 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0"} err="failed to get container status \"ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0\": rpc error: code = NotFound desc = could not find container \"ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0\": container with ID starting with ff7c3262561cf7ef5c69b94c40b4a8c112450fdb3473e30e29bb9053249205b0 not found: ID does not exist" Mar 14 09:12:32 crc kubenswrapper[4869]: I0314 09:12:32.233009 4869 scope.go:117] "RemoveContainer" containerID="77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083" Mar 14 09:12:32 crc kubenswrapper[4869]: E0314 09:12:32.233608 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083\": container with ID starting with 77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083 not found: ID does not exist" containerID="77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083" Mar 14 09:12:32 crc kubenswrapper[4869]: I0314 09:12:32.233683 4869 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083"} err="failed to get container status \"77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083\": rpc error: code = NotFound desc = could not find container \"77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083\": container with ID starting with 77e58410eb8f492992f0b3cfc55a10d96d6367cf097fa19a9e2f47dacc26a083 not found: ID does not exist" Mar 14 09:12:32 crc kubenswrapper[4869]: I0314 09:12:32.233737 4869 scope.go:117] "RemoveContainer" containerID="1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235" Mar 14 09:12:32 crc kubenswrapper[4869]: E0314 09:12:32.236230 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235\": container with ID starting with 1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235 not found: ID does not exist" containerID="1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235" Mar 14 09:12:32 crc kubenswrapper[4869]: I0314 09:12:32.236270 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235"} err="failed to get container status \"1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235\": rpc error: code = NotFound desc = could not find container \"1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235\": container with ID starting with 1c6f3874033b9495924f41425db33ee7a9d4bee223a62731e1086a554cbff235 not found: ID does not exist" Mar 14 09:12:33 crc kubenswrapper[4869]: I0314 09:12:33.444852 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s" 
event={"ID":"7def1104-dc9a-43ed-9c74-744352ed80cb","Type":"ContainerStarted","Data":"1e6f4574f754b5a654c6be00f29c705f36c7bebe0ce174d690d24cf40b6899b9"} Mar 14 09:12:36 crc kubenswrapper[4869]: I0314 09:12:36.449527 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-965cd" Mar 14 09:12:36 crc kubenswrapper[4869]: I0314 09:12:36.476713 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-pjm8s" podStartSLOduration=5.641586556 podStartE2EDuration="11.476684416s" podCreationTimestamp="2026-03-14 09:12:25 +0000 UTC" firstStartedPulling="2026-03-14 09:12:26.659690596 +0000 UTC m=+899.631972649" lastFinishedPulling="2026-03-14 09:12:32.494788456 +0000 UTC m=+905.467070509" observedRunningTime="2026-03-14 09:12:33.471835775 +0000 UTC m=+906.444117828" watchObservedRunningTime="2026-03-14 09:12:36.476684416 +0000 UTC m=+909.448966479" Mar 14 09:12:36 crc kubenswrapper[4869]: I0314 09:12:36.631013 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:36 crc kubenswrapper[4869]: I0314 09:12:36.631095 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:36 crc kubenswrapper[4869]: I0314 09:12:36.639573 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:37 crc kubenswrapper[4869]: I0314 09:12:37.489037 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-579b8cb5c4-fhh55" Mar 14 09:12:37 crc kubenswrapper[4869]: I0314 09:12:37.580720 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-plgzk"] Mar 14 09:12:46 crc kubenswrapper[4869]: I0314 09:12:46.405736 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-244rz" Mar 14 09:12:51 crc kubenswrapper[4869]: I0314 09:12:51.868672 4869 scope.go:117] "RemoveContainer" containerID="6a802fd31527e618e7d4de122a40304ff7b29d7bd5c99411bf5fb9b60dbb4601" Mar 14 09:13:02 crc kubenswrapper[4869]: I0314 09:13:02.627526 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-plgzk" podUID="14eab3cd-227a-4e8a-8bf1-f78ee852637c" containerName="console" containerID="cri-o://45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3" gracePeriod=15 Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.054438 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-plgzk_14eab3cd-227a-4e8a-8bf1-f78ee852637c/console/0.log" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.054917 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.203012 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-serving-cert\") pod \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.203099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-oauth-serving-cert\") pod \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.203118 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-oauth-config\") pod \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.203153 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-service-ca\") pod \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.203171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-config\") pod \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.203225 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b622w\" (UniqueName: \"kubernetes.io/projected/14eab3cd-227a-4e8a-8bf1-f78ee852637c-kube-api-access-b622w\") pod \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.203259 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-trusted-ca-bundle\") pod \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\" (UID: \"14eab3cd-227a-4e8a-8bf1-f78ee852637c\") " Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.204096 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "14eab3cd-227a-4e8a-8bf1-f78ee852637c" (UID: "14eab3cd-227a-4e8a-8bf1-f78ee852637c"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.204123 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-config" (OuterVolumeSpecName: "console-config") pod "14eab3cd-227a-4e8a-8bf1-f78ee852637c" (UID: "14eab3cd-227a-4e8a-8bf1-f78ee852637c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.204107 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-service-ca" (OuterVolumeSpecName: "service-ca") pod "14eab3cd-227a-4e8a-8bf1-f78ee852637c" (UID: "14eab3cd-227a-4e8a-8bf1-f78ee852637c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.204176 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "14eab3cd-227a-4e8a-8bf1-f78ee852637c" (UID: "14eab3cd-227a-4e8a-8bf1-f78ee852637c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.211605 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "14eab3cd-227a-4e8a-8bf1-f78ee852637c" (UID: "14eab3cd-227a-4e8a-8bf1-f78ee852637c"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.211712 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14eab3cd-227a-4e8a-8bf1-f78ee852637c-kube-api-access-b622w" (OuterVolumeSpecName: "kube-api-access-b622w") pod "14eab3cd-227a-4e8a-8bf1-f78ee852637c" (UID: "14eab3cd-227a-4e8a-8bf1-f78ee852637c"). InnerVolumeSpecName "kube-api-access-b622w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.215071 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "14eab3cd-227a-4e8a-8bf1-f78ee852637c" (UID: "14eab3cd-227a-4e8a-8bf1-f78ee852637c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.304661 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b622w\" (UniqueName: \"kubernetes.io/projected/14eab3cd-227a-4e8a-8bf1-f78ee852637c-kube-api-access-b622w\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.304699 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.304708 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.304716 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.304724 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.304733 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-service-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.304742 4869 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14eab3cd-227a-4e8a-8bf1-f78ee852637c-console-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.711743 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-plgzk_14eab3cd-227a-4e8a-8bf1-f78ee852637c/console/0.log" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.712008 4869 generic.go:334] "Generic (PLEG): container finished" podID="14eab3cd-227a-4e8a-8bf1-f78ee852637c" containerID="45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3" exitCode=2 Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.712049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-plgzk" event={"ID":"14eab3cd-227a-4e8a-8bf1-f78ee852637c","Type":"ContainerDied","Data":"45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3"} Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.712087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-plgzk" 
event={"ID":"14eab3cd-227a-4e8a-8bf1-f78ee852637c","Type":"ContainerDied","Data":"f94c107e3a50e49507e4df30cc2c547dd004003dd07aad6152357cb45f33f281"} Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.712103 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-plgzk" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.712111 4869 scope.go:117] "RemoveContainer" containerID="45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.735826 4869 scope.go:117] "RemoveContainer" containerID="45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3" Mar 14 09:13:03 crc kubenswrapper[4869]: E0314 09:13:03.741634 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3\": container with ID starting with 45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3 not found: ID does not exist" containerID="45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.741683 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3"} err="failed to get container status \"45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3\": rpc error: code = NotFound desc = could not find container \"45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3\": container with ID starting with 45acbf9fc3f036a7fcdf55767d68027804d913f168751a829c15820896a42ae3 not found: ID does not exist" Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.748537 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-plgzk"] Mar 14 09:13:03 crc kubenswrapper[4869]: I0314 09:13:03.755591 4869 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-plgzk"] Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.921618 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts"] Mar 14 09:13:04 crc kubenswrapper[4869]: E0314 09:13:04.922311 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" containerName="extract-content" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.922332 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" containerName="extract-content" Mar 14 09:13:04 crc kubenswrapper[4869]: E0314 09:13:04.922351 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14eab3cd-227a-4e8a-8bf1-f78ee852637c" containerName="console" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.922361 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="14eab3cd-227a-4e8a-8bf1-f78ee852637c" containerName="console" Mar 14 09:13:04 crc kubenswrapper[4869]: E0314 09:13:04.922375 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" containerName="extract-utilities" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.922387 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" containerName="extract-utilities" Mar 14 09:13:04 crc kubenswrapper[4869]: E0314 09:13:04.922407 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" containerName="registry-server" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.922418 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" containerName="registry-server" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.922605 4869 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="14eab3cd-227a-4e8a-8bf1-f78ee852637c" containerName="console" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.922628 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="56398221-45fd-44e2-a725-ba2dae4f476f" containerName="registry-server" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.923895 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.925990 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.926098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lst89\" (UniqueName: \"kubernetes.io/projected/3ddf1a82-4f87-475c-895b-23cfe6ed443c-kube-api-access-lst89\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.926140 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:04 crc 
kubenswrapper[4869]: I0314 09:13:04.927069 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 14 09:13:04 crc kubenswrapper[4869]: I0314 09:13:04.932699 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts"] Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.027020 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.027104 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.027156 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lst89\" (UniqueName: \"kubernetes.io/projected/3ddf1a82-4f87-475c-895b-23cfe6ed443c-kube-api-access-lst89\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.027707 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-bundle\") pod 
\"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.027725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.062395 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lst89\" (UniqueName: \"kubernetes.io/projected/3ddf1a82-4f87-475c-895b-23cfe6ed443c-kube-api-access-lst89\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.245441 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.462075 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts"] Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.712221 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14eab3cd-227a-4e8a-8bf1-f78ee852637c" path="/var/lib/kubelet/pods/14eab3cd-227a-4e8a-8bf1-f78ee852637c/volumes" Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.727216 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" event={"ID":"3ddf1a82-4f87-475c-895b-23cfe6ed443c","Type":"ContainerStarted","Data":"d335cb209f9d8a767d68109a9e71ea34c301454e223df53fa2114080c72d017a"} Mar 14 09:13:05 crc kubenswrapper[4869]: I0314 09:13:05.727265 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" event={"ID":"3ddf1a82-4f87-475c-895b-23cfe6ed443c","Type":"ContainerStarted","Data":"a0e1da9b1418166bd16a86b16fb8a909bf0cba4c6d472f6e540553d86317ca83"} Mar 14 09:13:06 crc kubenswrapper[4869]: I0314 09:13:06.737722 4869 generic.go:334] "Generic (PLEG): container finished" podID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerID="d335cb209f9d8a767d68109a9e71ea34c301454e223df53fa2114080c72d017a" exitCode=0 Mar 14 09:13:06 crc kubenswrapper[4869]: I0314 09:13:06.737791 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" event={"ID":"3ddf1a82-4f87-475c-895b-23cfe6ed443c","Type":"ContainerDied","Data":"d335cb209f9d8a767d68109a9e71ea34c301454e223df53fa2114080c72d017a"} Mar 14 09:13:06 crc kubenswrapper[4869]: I0314 09:13:06.740535 4869 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:13:09 crc kubenswrapper[4869]: I0314 09:13:09.759728 4869 generic.go:334] "Generic (PLEG): container finished" podID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerID="26f360f80840f3bd1a804b668b82f745da3ccefeb93c3059cf614bcf5cde9f12" exitCode=0 Mar 14 09:13:09 crc kubenswrapper[4869]: I0314 09:13:09.760326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" event={"ID":"3ddf1a82-4f87-475c-895b-23cfe6ed443c","Type":"ContainerDied","Data":"26f360f80840f3bd1a804b668b82f745da3ccefeb93c3059cf614bcf5cde9f12"} Mar 14 09:13:10 crc kubenswrapper[4869]: I0314 09:13:10.769079 4869 generic.go:334] "Generic (PLEG): container finished" podID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerID="fdcdb31ad3de875c59671e49a2ac6fe49a1831607a99fadac7b177c0b274207e" exitCode=0 Mar 14 09:13:10 crc kubenswrapper[4869]: I0314 09:13:10.769158 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" event={"ID":"3ddf1a82-4f87-475c-895b-23cfe6ed443c","Type":"ContainerDied","Data":"fdcdb31ad3de875c59671e49a2ac6fe49a1831607a99fadac7b177c0b274207e"} Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.061665 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.234945 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-bundle\") pod \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.235090 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lst89\" (UniqueName: \"kubernetes.io/projected/3ddf1a82-4f87-475c-895b-23cfe6ed443c-kube-api-access-lst89\") pod \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.235331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-util\") pod \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\" (UID: \"3ddf1a82-4f87-475c-895b-23cfe6ed443c\") " Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.237434 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-bundle" (OuterVolumeSpecName: "bundle") pod "3ddf1a82-4f87-475c-895b-23cfe6ed443c" (UID: "3ddf1a82-4f87-475c-895b-23cfe6ed443c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.245735 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ddf1a82-4f87-475c-895b-23cfe6ed443c-kube-api-access-lst89" (OuterVolumeSpecName: "kube-api-access-lst89") pod "3ddf1a82-4f87-475c-895b-23cfe6ed443c" (UID: "3ddf1a82-4f87-475c-895b-23cfe6ed443c"). InnerVolumeSpecName "kube-api-access-lst89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.258951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-util" (OuterVolumeSpecName: "util") pod "3ddf1a82-4f87-475c-895b-23cfe6ed443c" (UID: "3ddf1a82-4f87-475c-895b-23cfe6ed443c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.337081 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.337129 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lst89\" (UniqueName: \"kubernetes.io/projected/3ddf1a82-4f87-475c-895b-23cfe6ed443c-kube-api-access-lst89\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.337151 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ddf1a82-4f87-475c-895b-23cfe6ed443c-util\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.783163 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" event={"ID":"3ddf1a82-4f87-475c-895b-23cfe6ed443c","Type":"ContainerDied","Data":"a0e1da9b1418166bd16a86b16fb8a909bf0cba4c6d472f6e540553d86317ca83"} Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.783207 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e1da9b1418166bd16a86b16fb8a909bf0cba4c6d472f6e540553d86317ca83" Mar 14 09:13:12 crc kubenswrapper[4869]: I0314 09:13:12.783268 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.055071 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kpw2q"] Mar 14 09:13:21 crc kubenswrapper[4869]: E0314 09:13:21.056292 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerName="extract" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.056310 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerName="extract" Mar 14 09:13:21 crc kubenswrapper[4869]: E0314 09:13:21.056325 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerName="pull" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.056331 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerName="pull" Mar 14 09:13:21 crc kubenswrapper[4869]: E0314 09:13:21.056339 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerName="util" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.056346 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerName="util" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.056464 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ddf1a82-4f87-475c-895b-23cfe6ed443c" containerName="extract" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.057444 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.072377 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kpw2q"] Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.184998 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-utilities\") pod \"community-operators-kpw2q\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.185053 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l76p\" (UniqueName: \"kubernetes.io/projected/431b4e49-f6e2-49c2-ada7-305620d8caab-kube-api-access-2l76p\") pod \"community-operators-kpw2q\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.185126 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-catalog-content\") pod \"community-operators-kpw2q\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.286130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l76p\" (UniqueName: \"kubernetes.io/projected/431b4e49-f6e2-49c2-ada7-305620d8caab-kube-api-access-2l76p\") pod \"community-operators-kpw2q\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.286263 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-catalog-content\") pod \"community-operators-kpw2q\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.286295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-utilities\") pod \"community-operators-kpw2q\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.286842 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-utilities\") pod \"community-operators-kpw2q\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.286924 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-catalog-content\") pod \"community-operators-kpw2q\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.308653 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l76p\" (UniqueName: \"kubernetes.io/projected/431b4e49-f6e2-49c2-ada7-305620d8caab-kube-api-access-2l76p\") pod \"community-operators-kpw2q\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.374826 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.637227 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kpw2q"] Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.842184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpw2q" event={"ID":"431b4e49-f6e2-49c2-ada7-305620d8caab","Type":"ContainerStarted","Data":"d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9"} Mar 14 09:13:21 crc kubenswrapper[4869]: I0314 09:13:21.842232 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpw2q" event={"ID":"431b4e49-f6e2-49c2-ada7-305620d8caab","Type":"ContainerStarted","Data":"c46eb8e96315e64481b52139b131ea4ddba9ff47c11e262262b996f3acae7dae"} Mar 14 09:13:22 crc kubenswrapper[4869]: I0314 09:13:22.850452 4869 generic.go:334] "Generic (PLEG): container finished" podID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerID="d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9" exitCode=0 Mar 14 09:13:22 crc kubenswrapper[4869]: I0314 09:13:22.850564 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpw2q" event={"ID":"431b4e49-f6e2-49c2-ada7-305620d8caab","Type":"ContainerDied","Data":"d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9"} Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.655807 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km"] Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.656887 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.658926 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.659009 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qhv5h" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.659271 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.660845 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.660992 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.718085 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km"] Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.728138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0-webhook-cert\") pod \"metallb-operator-controller-manager-c96b4b56d-sg8km\" (UID: \"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0\") " pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.728210 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbx6k\" (UniqueName: \"kubernetes.io/projected/9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0-kube-api-access-hbx6k\") pod 
\"metallb-operator-controller-manager-c96b4b56d-sg8km\" (UID: \"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0\") " pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.728400 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0-apiservice-cert\") pod \"metallb-operator-controller-manager-c96b4b56d-sg8km\" (UID: \"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0\") " pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.830041 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbx6k\" (UniqueName: \"kubernetes.io/projected/9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0-kube-api-access-hbx6k\") pod \"metallb-operator-controller-manager-c96b4b56d-sg8km\" (UID: \"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0\") " pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.830142 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0-apiservice-cert\") pod \"metallb-operator-controller-manager-c96b4b56d-sg8km\" (UID: \"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0\") " pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.830189 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0-webhook-cert\") pod \"metallb-operator-controller-manager-c96b4b56d-sg8km\" (UID: \"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0\") " pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc 
kubenswrapper[4869]: I0314 09:13:23.841961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0-webhook-cert\") pod \"metallb-operator-controller-manager-c96b4b56d-sg8km\" (UID: \"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0\") " pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.851252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0-apiservice-cert\") pod \"metallb-operator-controller-manager-c96b4b56d-sg8km\" (UID: \"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0\") " pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.853221 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbx6k\" (UniqueName: \"kubernetes.io/projected/9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0-kube-api-access-hbx6k\") pod \"metallb-operator-controller-manager-c96b4b56d-sg8km\" (UID: \"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0\") " pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.871484 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpw2q" event={"ID":"431b4e49-f6e2-49c2-ada7-305620d8caab","Type":"ContainerStarted","Data":"485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1"} Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.942268 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5"] Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.943288 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.946249 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.947925 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ph2pm" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.948018 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.961238 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5"] Mar 14 09:13:23 crc kubenswrapper[4869]: I0314 09:13:23.974561 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.032952 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qjkl\" (UniqueName: \"kubernetes.io/projected/1585557f-13cc-49e6-8360-ab13426bbeb8-kube-api-access-5qjkl\") pod \"metallb-operator-webhook-server-74b96dc575-jpcc5\" (UID: \"1585557f-13cc-49e6-8360-ab13426bbeb8\") " pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.033083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1585557f-13cc-49e6-8360-ab13426bbeb8-webhook-cert\") pod \"metallb-operator-webhook-server-74b96dc575-jpcc5\" (UID: \"1585557f-13cc-49e6-8360-ab13426bbeb8\") " pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 
09:13:24.033156 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1585557f-13cc-49e6-8360-ab13426bbeb8-apiservice-cert\") pod \"metallb-operator-webhook-server-74b96dc575-jpcc5\" (UID: \"1585557f-13cc-49e6-8360-ab13426bbeb8\") " pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.134859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1585557f-13cc-49e6-8360-ab13426bbeb8-webhook-cert\") pod \"metallb-operator-webhook-server-74b96dc575-jpcc5\" (UID: \"1585557f-13cc-49e6-8360-ab13426bbeb8\") " pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.135296 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1585557f-13cc-49e6-8360-ab13426bbeb8-apiservice-cert\") pod \"metallb-operator-webhook-server-74b96dc575-jpcc5\" (UID: \"1585557f-13cc-49e6-8360-ab13426bbeb8\") " pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.135347 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qjkl\" (UniqueName: \"kubernetes.io/projected/1585557f-13cc-49e6-8360-ab13426bbeb8-kube-api-access-5qjkl\") pod \"metallb-operator-webhook-server-74b96dc575-jpcc5\" (UID: \"1585557f-13cc-49e6-8360-ab13426bbeb8\") " pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.145763 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1585557f-13cc-49e6-8360-ab13426bbeb8-webhook-cert\") pod 
\"metallb-operator-webhook-server-74b96dc575-jpcc5\" (UID: \"1585557f-13cc-49e6-8360-ab13426bbeb8\") " pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.145846 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1585557f-13cc-49e6-8360-ab13426bbeb8-apiservice-cert\") pod \"metallb-operator-webhook-server-74b96dc575-jpcc5\" (UID: \"1585557f-13cc-49e6-8360-ab13426bbeb8\") " pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.156730 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qjkl\" (UniqueName: \"kubernetes.io/projected/1585557f-13cc-49e6-8360-ab13426bbeb8-kube-api-access-5qjkl\") pod \"metallb-operator-webhook-server-74b96dc575-jpcc5\" (UID: \"1585557f-13cc-49e6-8360-ab13426bbeb8\") " pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.265035 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.511875 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km"] Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.642926 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5"] Mar 14 09:13:24 crc kubenswrapper[4869]: W0314 09:13:24.648090 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1585557f_13cc_49e6_8360_ab13426bbeb8.slice/crio-004e7c6a5e3d062b8fbf66481fa9018a660a5314f0f2b095cf664ca34b37557a WatchSource:0}: Error finding container 004e7c6a5e3d062b8fbf66481fa9018a660a5314f0f2b095cf664ca34b37557a: Status 404 returned error can't find the container with id 004e7c6a5e3d062b8fbf66481fa9018a660a5314f0f2b095cf664ca34b37557a Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.880273 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" event={"ID":"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0","Type":"ContainerStarted","Data":"1b8f4029a3c024bd8908a47da2977fac72903369700a6fb97c256be916230130"} Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.882598 4869 generic.go:334] "Generic (PLEG): container finished" podID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerID="485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1" exitCode=0 Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.882631 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpw2q" event={"ID":"431b4e49-f6e2-49c2-ada7-305620d8caab","Type":"ContainerDied","Data":"485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1"} Mar 14 09:13:24 crc kubenswrapper[4869]: I0314 09:13:24.884871 
4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" event={"ID":"1585557f-13cc-49e6-8360-ab13426bbeb8","Type":"ContainerStarted","Data":"004e7c6a5e3d062b8fbf66481fa9018a660a5314f0f2b095cf664ca34b37557a"} Mar 14 09:13:25 crc kubenswrapper[4869]: I0314 09:13:25.896810 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpw2q" event={"ID":"431b4e49-f6e2-49c2-ada7-305620d8caab","Type":"ContainerStarted","Data":"08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6"} Mar 14 09:13:27 crc kubenswrapper[4869]: I0314 09:13:27.756713 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kpw2q" podStartSLOduration=4.342030596 podStartE2EDuration="6.756689313s" podCreationTimestamp="2026-03-14 09:13:21 +0000 UTC" firstStartedPulling="2026-03-14 09:13:22.852056557 +0000 UTC m=+955.824338630" lastFinishedPulling="2026-03-14 09:13:25.266715294 +0000 UTC m=+958.238997347" observedRunningTime="2026-03-14 09:13:25.915271532 +0000 UTC m=+958.887553605" watchObservedRunningTime="2026-03-14 09:13:27.756689313 +0000 UTC m=+960.728971366" Mar 14 09:13:28 crc kubenswrapper[4869]: I0314 09:13:28.921045 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" event={"ID":"9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0","Type":"ContainerStarted","Data":"245ab13322ca248084daea8b2a22d361a55885f29240583bf110b0948fc6f9b5"} Mar 14 09:13:28 crc kubenswrapper[4869]: I0314 09:13:28.923249 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:13:30 crc kubenswrapper[4869]: I0314 09:13:30.972121 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" 
event={"ID":"1585557f-13cc-49e6-8360-ab13426bbeb8","Type":"ContainerStarted","Data":"3ba0ed2c2f7ff9e755869eabeb9ac91315b6c1e35e1c9c213810b50f58140b80"} Mar 14 09:13:30 crc kubenswrapper[4869]: I0314 09:13:30.973397 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:31 crc kubenswrapper[4869]: I0314 09:13:31.000329 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" podStartSLOduration=4.441137668 podStartE2EDuration="8.00029622s" podCreationTimestamp="2026-03-14 09:13:23 +0000 UTC" firstStartedPulling="2026-03-14 09:13:24.524236019 +0000 UTC m=+957.496518072" lastFinishedPulling="2026-03-14 09:13:28.083394571 +0000 UTC m=+961.055676624" observedRunningTime="2026-03-14 09:13:28.951670928 +0000 UTC m=+961.923953001" watchObservedRunningTime="2026-03-14 09:13:31.00029622 +0000 UTC m=+963.972578313" Mar 14 09:13:31 crc kubenswrapper[4869]: I0314 09:13:31.004230 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" podStartSLOduration=2.293687047 podStartE2EDuration="8.004218226s" podCreationTimestamp="2026-03-14 09:13:23 +0000 UTC" firstStartedPulling="2026-03-14 09:13:24.651591199 +0000 UTC m=+957.623873252" lastFinishedPulling="2026-03-14 09:13:30.362122358 +0000 UTC m=+963.334404431" observedRunningTime="2026-03-14 09:13:30.99457996 +0000 UTC m=+963.966862013" watchObservedRunningTime="2026-03-14 09:13:31.004218226 +0000 UTC m=+963.976500319" Mar 14 09:13:31 crc kubenswrapper[4869]: I0314 09:13:31.375429 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:31 crc kubenswrapper[4869]: I0314 09:13:31.375543 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:31 crc kubenswrapper[4869]: I0314 09:13:31.427804 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:32 crc kubenswrapper[4869]: I0314 09:13:32.024838 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:32 crc kubenswrapper[4869]: I0314 09:13:32.078591 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kpw2q"] Mar 14 09:13:33 crc kubenswrapper[4869]: I0314 09:13:33.991679 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kpw2q" podUID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerName="registry-server" containerID="cri-o://08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6" gracePeriod=2 Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.084741 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hq7kd"] Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.086860 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.095433 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hq7kd"] Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.188435 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2lh\" (UniqueName: \"kubernetes.io/projected/3058e17e-a888-49bc-98e0-951bc589aa7a-kube-api-access-zk2lh\") pod \"certified-operators-hq7kd\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.189070 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-utilities\") pod \"certified-operators-hq7kd\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.189107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-catalog-content\") pod \"certified-operators-hq7kd\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.290082 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2lh\" (UniqueName: \"kubernetes.io/projected/3058e17e-a888-49bc-98e0-951bc589aa7a-kube-api-access-zk2lh\") pod \"certified-operators-hq7kd\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.291497 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-utilities\") pod \"certified-operators-hq7kd\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.301946 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-catalog-content\") pod \"certified-operators-hq7kd\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.302332 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-utilities\") pod \"certified-operators-hq7kd\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.302747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-catalog-content\") pod \"certified-operators-hq7kd\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.366880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2lh\" (UniqueName: \"kubernetes.io/projected/3058e17e-a888-49bc-98e0-951bc589aa7a-kube-api-access-zk2lh\") pod \"certified-operators-hq7kd\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.479745 4869 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.480321 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.606357 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-utilities\") pod \"431b4e49-f6e2-49c2-ada7-305620d8caab\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.606788 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l76p\" (UniqueName: \"kubernetes.io/projected/431b4e49-f6e2-49c2-ada7-305620d8caab-kube-api-access-2l76p\") pod \"431b4e49-f6e2-49c2-ada7-305620d8caab\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.606925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-catalog-content\") pod \"431b4e49-f6e2-49c2-ada7-305620d8caab\" (UID: \"431b4e49-f6e2-49c2-ada7-305620d8caab\") " Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.608734 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-utilities" (OuterVolumeSpecName: "utilities") pod "431b4e49-f6e2-49c2-ada7-305620d8caab" (UID: "431b4e49-f6e2-49c2-ada7-305620d8caab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.613433 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431b4e49-f6e2-49c2-ada7-305620d8caab-kube-api-access-2l76p" (OuterVolumeSpecName: "kube-api-access-2l76p") pod "431b4e49-f6e2-49c2-ada7-305620d8caab" (UID: "431b4e49-f6e2-49c2-ada7-305620d8caab"). InnerVolumeSpecName "kube-api-access-2l76p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.708914 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.708955 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l76p\" (UniqueName: \"kubernetes.io/projected/431b4e49-f6e2-49c2-ada7-305620d8caab-kube-api-access-2l76p\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.954057 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hq7kd"] Mar 14 09:13:34 crc kubenswrapper[4869]: W0314 09:13:34.955915 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3058e17e_a888_49bc_98e0_951bc589aa7a.slice/crio-07c52532330202794d3aa023f8ba5582c8b8947c34a85e01d430cd7de1f5540c WatchSource:0}: Error finding container 07c52532330202794d3aa023f8ba5582c8b8947c34a85e01d430cd7de1f5540c: Status 404 returned error can't find the container with id 07c52532330202794d3aa023f8ba5582c8b8947c34a85e01d430cd7de1f5540c Mar 14 09:13:34 crc kubenswrapper[4869]: I0314 09:13:34.998561 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7kd" 
event={"ID":"3058e17e-a888-49bc-98e0-951bc589aa7a","Type":"ContainerStarted","Data":"07c52532330202794d3aa023f8ba5582c8b8947c34a85e01d430cd7de1f5540c"} Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:34.999966 4869 generic.go:334] "Generic (PLEG): container finished" podID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerID="08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6" exitCode=0 Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.000001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpw2q" event={"ID":"431b4e49-f6e2-49c2-ada7-305620d8caab","Type":"ContainerDied","Data":"08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6"} Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.000027 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpw2q" event={"ID":"431b4e49-f6e2-49c2-ada7-305620d8caab","Type":"ContainerDied","Data":"c46eb8e96315e64481b52139b131ea4ddba9ff47c11e262262b996f3acae7dae"} Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.000042 4869 scope.go:117] "RemoveContainer" containerID="08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.000154 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kpw2q" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.024115 4869 scope.go:117] "RemoveContainer" containerID="485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.062834 4869 scope.go:117] "RemoveContainer" containerID="d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.087873 4869 scope.go:117] "RemoveContainer" containerID="08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6" Mar 14 09:13:35 crc kubenswrapper[4869]: E0314 09:13:35.088293 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6\": container with ID starting with 08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6 not found: ID does not exist" containerID="08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.088320 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6"} err="failed to get container status \"08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6\": rpc error: code = NotFound desc = could not find container \"08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6\": container with ID starting with 08b32d17a30a2dc6929ffcd16f2bb25536c6d726befe0f2daefa8b2e10f365a6 not found: ID does not exist" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.088343 4869 scope.go:117] "RemoveContainer" containerID="485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1" Mar 14 09:13:35 crc kubenswrapper[4869]: E0314 09:13:35.088545 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1\": container with ID starting with 485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1 not found: ID does not exist" containerID="485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.088568 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1"} err="failed to get container status \"485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1\": rpc error: code = NotFound desc = could not find container \"485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1\": container with ID starting with 485c71f18798b892bcdbacb5e5f6d4b22a9d046eee589bf221f564df6ac4c1b1 not found: ID does not exist" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.088581 4869 scope.go:117] "RemoveContainer" containerID="d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9" Mar 14 09:13:35 crc kubenswrapper[4869]: E0314 09:13:35.088757 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9\": container with ID starting with d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9 not found: ID does not exist" containerID="d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.088777 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9"} err="failed to get container status \"d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9\": rpc error: code = NotFound desc = could not find container 
\"d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9\": container with ID starting with d1aff339934d1c8b08b01a582599b8170727fd25644aa1a2b127268eb5f38dc9 not found: ID does not exist" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.367381 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "431b4e49-f6e2-49c2-ada7-305620d8caab" (UID: "431b4e49-f6e2-49c2-ada7-305620d8caab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.416890 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/431b4e49-f6e2-49c2-ada7-305620d8caab-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.627168 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kpw2q"] Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.631951 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kpw2q"] Mar 14 09:13:35 crc kubenswrapper[4869]: I0314 09:13:35.711813 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431b4e49-f6e2-49c2-ada7-305620d8caab" path="/var/lib/kubelet/pods/431b4e49-f6e2-49c2-ada7-305620d8caab/volumes" Mar 14 09:13:37 crc kubenswrapper[4869]: I0314 09:13:37.016620 4869 generic.go:334] "Generic (PLEG): container finished" podID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerID="49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09" exitCode=0 Mar 14 09:13:37 crc kubenswrapper[4869]: I0314 09:13:37.016737 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7kd" 
event={"ID":"3058e17e-a888-49bc-98e0-951bc589aa7a","Type":"ContainerDied","Data":"49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09"} Mar 14 09:13:38 crc kubenswrapper[4869]: I0314 09:13:38.027017 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7kd" event={"ID":"3058e17e-a888-49bc-98e0-951bc589aa7a","Type":"ContainerStarted","Data":"17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d"} Mar 14 09:13:39 crc kubenswrapper[4869]: I0314 09:13:39.038466 4869 generic.go:334] "Generic (PLEG): container finished" podID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerID="17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d" exitCode=0 Mar 14 09:13:39 crc kubenswrapper[4869]: I0314 09:13:39.038630 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7kd" event={"ID":"3058e17e-a888-49bc-98e0-951bc589aa7a","Type":"ContainerDied","Data":"17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d"} Mar 14 09:13:39 crc kubenswrapper[4869]: I0314 09:13:39.605368 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:13:39 crc kubenswrapper[4869]: I0314 09:13:39.605439 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:13:40 crc kubenswrapper[4869]: I0314 09:13:40.046378 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7kd" 
event={"ID":"3058e17e-a888-49bc-98e0-951bc589aa7a","Type":"ContainerStarted","Data":"e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5"} Mar 14 09:13:44 crc kubenswrapper[4869]: I0314 09:13:44.273010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-74b96dc575-jpcc5" Mar 14 09:13:44 crc kubenswrapper[4869]: I0314 09:13:44.296682 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hq7kd" podStartSLOduration=7.64906464 podStartE2EDuration="10.296654241s" podCreationTimestamp="2026-03-14 09:13:34 +0000 UTC" firstStartedPulling="2026-03-14 09:13:37.018783307 +0000 UTC m=+969.991065360" lastFinishedPulling="2026-03-14 09:13:39.666372908 +0000 UTC m=+972.638654961" observedRunningTime="2026-03-14 09:13:40.070837828 +0000 UTC m=+973.043119901" watchObservedRunningTime="2026-03-14 09:13:44.296654241 +0000 UTC m=+977.268936294" Mar 14 09:13:44 crc kubenswrapper[4869]: I0314 09:13:44.481190 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:44 crc kubenswrapper[4869]: I0314 09:13:44.481243 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:44 crc kubenswrapper[4869]: I0314 09:13:44.523406 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:45 crc kubenswrapper[4869]: I0314 09:13:45.149888 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:46 crc kubenswrapper[4869]: I0314 09:13:46.861206 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hq7kd"] Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.106708 4869 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hq7kd" podUID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerName="registry-server" containerID="cri-o://e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5" gracePeriod=2 Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.630061 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.685431 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-utilities\") pod \"3058e17e-a888-49bc-98e0-951bc589aa7a\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.685575 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk2lh\" (UniqueName: \"kubernetes.io/projected/3058e17e-a888-49bc-98e0-951bc589aa7a-kube-api-access-zk2lh\") pod \"3058e17e-a888-49bc-98e0-951bc589aa7a\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.685609 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-catalog-content\") pod \"3058e17e-a888-49bc-98e0-951bc589aa7a\" (UID: \"3058e17e-a888-49bc-98e0-951bc589aa7a\") " Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.687173 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-utilities" (OuterVolumeSpecName: "utilities") pod "3058e17e-a888-49bc-98e0-951bc589aa7a" (UID: "3058e17e-a888-49bc-98e0-951bc589aa7a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.694773 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3058e17e-a888-49bc-98e0-951bc589aa7a-kube-api-access-zk2lh" (OuterVolumeSpecName: "kube-api-access-zk2lh") pod "3058e17e-a888-49bc-98e0-951bc589aa7a" (UID: "3058e17e-a888-49bc-98e0-951bc589aa7a"). InnerVolumeSpecName "kube-api-access-zk2lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.762728 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3058e17e-a888-49bc-98e0-951bc589aa7a" (UID: "3058e17e-a888-49bc-98e0-951bc589aa7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.788088 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk2lh\" (UniqueName: \"kubernetes.io/projected/3058e17e-a888-49bc-98e0-951bc589aa7a-kube-api-access-zk2lh\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.788156 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:47 crc kubenswrapper[4869]: I0314 09:13:47.788171 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3058e17e-a888-49bc-98e0-951bc589aa7a-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.117197 4869 generic.go:334] "Generic (PLEG): container finished" podID="3058e17e-a888-49bc-98e0-951bc589aa7a" 
containerID="e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5" exitCode=0 Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.117253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7kd" event={"ID":"3058e17e-a888-49bc-98e0-951bc589aa7a","Type":"ContainerDied","Data":"e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5"} Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.117282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7kd" event={"ID":"3058e17e-a888-49bc-98e0-951bc589aa7a","Type":"ContainerDied","Data":"07c52532330202794d3aa023f8ba5582c8b8947c34a85e01d430cd7de1f5540c"} Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.117304 4869 scope.go:117] "RemoveContainer" containerID="e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.117464 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq7kd" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.155131 4869 scope.go:117] "RemoveContainer" containerID="17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.163502 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hq7kd"] Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.163616 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hq7kd"] Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.175719 4869 scope.go:117] "RemoveContainer" containerID="49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.199645 4869 scope.go:117] "RemoveContainer" containerID="e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5" Mar 14 09:13:48 crc kubenswrapper[4869]: E0314 09:13:48.200226 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5\": container with ID starting with e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5 not found: ID does not exist" containerID="e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.200256 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5"} err="failed to get container status \"e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5\": rpc error: code = NotFound desc = could not find container \"e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5\": container with ID starting with e69c62c4e99c71de57da82b0e8c9f944bf8ee496e99e85c65cb063eb9929bce5 not 
found: ID does not exist" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.200278 4869 scope.go:117] "RemoveContainer" containerID="17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d" Mar 14 09:13:48 crc kubenswrapper[4869]: E0314 09:13:48.200553 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d\": container with ID starting with 17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d not found: ID does not exist" containerID="17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.200574 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d"} err="failed to get container status \"17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d\": rpc error: code = NotFound desc = could not find container \"17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d\": container with ID starting with 17b0a8f516160434ee4922055953c29039140bb0774fa2d3fd2b21a84103145d not found: ID does not exist" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.200584 4869 scope.go:117] "RemoveContainer" containerID="49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09" Mar 14 09:13:48 crc kubenswrapper[4869]: E0314 09:13:48.200847 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09\": container with ID starting with 49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09 not found: ID does not exist" containerID="49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09" Mar 14 09:13:48 crc kubenswrapper[4869]: I0314 09:13:48.200904 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09"} err="failed to get container status \"49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09\": rpc error: code = NotFound desc = could not find container \"49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09\": container with ID starting with 49b41b43291d400143355d07ea50b3411ee0e7197ee2b395bc262a29b297af09 not found: ID does not exist" Mar 14 09:13:49 crc kubenswrapper[4869]: I0314 09:13:49.711298 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3058e17e-a888-49bc-98e0-951bc589aa7a" path="/var/lib/kubelet/pods/3058e17e-a888-49bc-98e0-951bc589aa7a/volumes" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.135647 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557994-7hfhb"] Mar 14 09:14:00 crc kubenswrapper[4869]: E0314 09:14:00.136471 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerName="extract-utilities" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.136489 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerName="extract-utilities" Mar 14 09:14:00 crc kubenswrapper[4869]: E0314 09:14:00.136503 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerName="extract-content" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.136546 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerName="extract-content" Mar 14 09:14:00 crc kubenswrapper[4869]: E0314 09:14:00.136562 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerName="registry-server" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 
09:14:00.136569 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerName="registry-server" Mar 14 09:14:00 crc kubenswrapper[4869]: E0314 09:14:00.136583 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerName="registry-server" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.136590 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerName="registry-server" Mar 14 09:14:00 crc kubenswrapper[4869]: E0314 09:14:00.136597 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerName="extract-content" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.136604 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerName="extract-content" Mar 14 09:14:00 crc kubenswrapper[4869]: E0314 09:14:00.136619 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerName="extract-utilities" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.136626 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerName="extract-utilities" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.136746 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="431b4e49-f6e2-49c2-ada7-305620d8caab" containerName="registry-server" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.136764 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3058e17e-a888-49bc-98e0-951bc589aa7a" containerName="registry-server" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.137261 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557994-7hfhb" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.139654 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.139884 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.140296 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.144804 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557994-7hfhb"] Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.268816 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pv9d\" (UniqueName: \"kubernetes.io/projected/75c2f58d-4863-43b4-b4ec-d839270ade42-kube-api-access-2pv9d\") pod \"auto-csr-approver-29557994-7hfhb\" (UID: \"75c2f58d-4863-43b4-b4ec-d839270ade42\") " pod="openshift-infra/auto-csr-approver-29557994-7hfhb" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.370419 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pv9d\" (UniqueName: \"kubernetes.io/projected/75c2f58d-4863-43b4-b4ec-d839270ade42-kube-api-access-2pv9d\") pod \"auto-csr-approver-29557994-7hfhb\" (UID: \"75c2f58d-4863-43b4-b4ec-d839270ade42\") " pod="openshift-infra/auto-csr-approver-29557994-7hfhb" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.391288 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pv9d\" (UniqueName: \"kubernetes.io/projected/75c2f58d-4863-43b4-b4ec-d839270ade42-kube-api-access-2pv9d\") pod \"auto-csr-approver-29557994-7hfhb\" (UID: \"75c2f58d-4863-43b4-b4ec-d839270ade42\") " 
pod="openshift-infra/auto-csr-approver-29557994-7hfhb" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.468614 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557994-7hfhb" Mar 14 09:14:00 crc kubenswrapper[4869]: I0314 09:14:00.928003 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557994-7hfhb"] Mar 14 09:14:01 crc kubenswrapper[4869]: I0314 09:14:01.205847 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557994-7hfhb" event={"ID":"75c2f58d-4863-43b4-b4ec-d839270ade42","Type":"ContainerStarted","Data":"9533a217bfe0676acef42fa551445ff3649740670fb31dbf3735d70b9ecbd427"} Mar 14 09:14:02 crc kubenswrapper[4869]: I0314 09:14:02.217831 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557994-7hfhb" event={"ID":"75c2f58d-4863-43b4-b4ec-d839270ade42","Type":"ContainerStarted","Data":"f80eb8714d645107731349ccd3eb7bf1625a24d510600e59922765603d4dcabe"} Mar 14 09:14:02 crc kubenswrapper[4869]: I0314 09:14:02.240766 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557994-7hfhb" podStartSLOduration=1.276737295 podStartE2EDuration="2.240747574s" podCreationTimestamp="2026-03-14 09:14:00 +0000 UTC" firstStartedPulling="2026-03-14 09:14:00.942101334 +0000 UTC m=+993.914383387" lastFinishedPulling="2026-03-14 09:14:01.906111603 +0000 UTC m=+994.878393666" observedRunningTime="2026-03-14 09:14:02.238438216 +0000 UTC m=+995.210720279" watchObservedRunningTime="2026-03-14 09:14:02.240747574 +0000 UTC m=+995.213029627" Mar 14 09:14:03 crc kubenswrapper[4869]: I0314 09:14:03.235361 4869 generic.go:334] "Generic (PLEG): container finished" podID="75c2f58d-4863-43b4-b4ec-d839270ade42" containerID="f80eb8714d645107731349ccd3eb7bf1625a24d510600e59922765603d4dcabe" exitCode=0 Mar 14 09:14:03 crc 
kubenswrapper[4869]: I0314 09:14:03.235488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557994-7hfhb" event={"ID":"75c2f58d-4863-43b4-b4ec-d839270ade42","Type":"ContainerDied","Data":"f80eb8714d645107731349ccd3eb7bf1625a24d510600e59922765603d4dcabe"} Mar 14 09:14:03 crc kubenswrapper[4869]: I0314 09:14:03.978098 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-c96b4b56d-sg8km" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.570580 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557994-7hfhb" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.633636 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pv9d\" (UniqueName: \"kubernetes.io/projected/75c2f58d-4863-43b4-b4ec-d839270ade42-kube-api-access-2pv9d\") pod \"75c2f58d-4863-43b4-b4ec-d839270ade42\" (UID: \"75c2f58d-4863-43b4-b4ec-d839270ade42\") " Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.648305 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c2f58d-4863-43b4-b4ec-d839270ade42-kube-api-access-2pv9d" (OuterVolumeSpecName: "kube-api-access-2pv9d") pod "75c2f58d-4863-43b4-b4ec-d839270ade42" (UID: "75c2f58d-4863-43b4-b4ec-d839270ade42"). InnerVolumeSpecName "kube-api-access-2pv9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.727308 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck"] Mar 14 09:14:04 crc kubenswrapper[4869]: E0314 09:14:04.729352 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c2f58d-4863-43b4-b4ec-d839270ade42" containerName="oc" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.729378 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c2f58d-4863-43b4-b4ec-d839270ade42" containerName="oc" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.729538 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c2f58d-4863-43b4-b4ec-d839270ade42" containerName="oc" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.730148 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.732544 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lkd7r" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.732880 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.734762 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pv9d\" (UniqueName: \"kubernetes.io/projected/75c2f58d-4863-43b4-b4ec-d839270ade42-kube-api-access-2pv9d\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.742648 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck"] Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.746581 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4sspm"] Mar 14 09:14:04 crc 
kubenswrapper[4869]: I0314 09:14:04.748903 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.751304 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.751540 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.823802 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-px22f"] Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.825661 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-px22f" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.828011 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.828288 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.828580 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835490 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-reloader\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835584 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-metrics\") pod \"frr-k8s-4sspm\" (UID: 
\"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835626 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwr76\" (UniqueName: \"kubernetes.io/projected/c1bf896c-b7f5-4ee8-a8f3-531729f11481-kube-api-access-cwr76\") pod \"frr-k8s-webhook-server-bcc4b6f68-b6tck\" (UID: \"c1bf896c-b7f5-4ee8-a8f3-531729f11481\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835651 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-frr-sockets\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835671 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-frr-conf\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835683 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kl9lg" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b55cd623-5f55-4111-a671-e409e6c02697-frr-startup\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835757 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w8dpz\" (UniqueName: \"kubernetes.io/projected/b55cd623-5f55-4111-a671-e409e6c02697-kube-api-access-w8dpz\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835786 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1bf896c-b7f5-4ee8-a8f3-531729f11481-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b6tck\" (UID: \"c1bf896c-b7f5-4ee8-a8f3-531729f11481\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.835809 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b55cd623-5f55-4111-a671-e409e6c02697-metrics-certs\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.844049 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-kctvk"] Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.845307 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.848079 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.853145 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-kctvk"] Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-frr-conf\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941561 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b55cd623-5f55-4111-a671-e409e6c02697-frr-startup\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941585 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71df09af-93c6-48ff-b88b-cb91b0649482-cert\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941609 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8dpz\" (UniqueName: \"kubernetes.io/projected/b55cd623-5f55-4111-a671-e409e6c02697-kube-api-access-w8dpz\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941637 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1bf896c-b7f5-4ee8-a8f3-531729f11481-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b6tck\" (UID: \"c1bf896c-b7f5-4ee8-a8f3-531729f11481\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b55cd623-5f55-4111-a671-e409e6c02697-metrics-certs\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-memberlist\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941689 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/194fa38d-a339-4883-bc71-3601aa7441b3-metallb-excludel2\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-reloader\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941734 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/71df09af-93c6-48ff-b88b-cb91b0649482-metrics-certs\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941754 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-metrics-certs\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-metrics\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzbp\" (UniqueName: \"kubernetes.io/projected/71df09af-93c6-48ff-b88b-cb91b0649482-kube-api-access-ndzbp\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941812 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8vm5\" (UniqueName: \"kubernetes.io/projected/194fa38d-a339-4883-bc71-3601aa7441b3-kube-api-access-c8vm5\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941832 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwr76\" (UniqueName: 
\"kubernetes.io/projected/c1bf896c-b7f5-4ee8-a8f3-531729f11481-kube-api-access-cwr76\") pod \"frr-k8s-webhook-server-bcc4b6f68-b6tck\" (UID: \"c1bf896c-b7f5-4ee8-a8f3-531729f11481\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941851 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-frr-sockets\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.941995 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-frr-conf\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.942072 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-frr-sockets\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: E0314 09:14:04.942308 4869 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 14 09:14:04 crc kubenswrapper[4869]: E0314 09:14:04.942343 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1bf896c-b7f5-4ee8-a8f3-531729f11481-cert podName:c1bf896c-b7f5-4ee8-a8f3-531729f11481 nodeName:}" failed. No retries permitted until 2026-03-14 09:14:05.4423309 +0000 UTC m=+998.414612953 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c1bf896c-b7f5-4ee8-a8f3-531729f11481-cert") pod "frr-k8s-webhook-server-bcc4b6f68-b6tck" (UID: "c1bf896c-b7f5-4ee8-a8f3-531729f11481") : secret "frr-k8s-webhook-server-cert" not found Mar 14 09:14:04 crc kubenswrapper[4869]: E0314 09:14:04.942378 4869 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 14 09:14:04 crc kubenswrapper[4869]: E0314 09:14:04.942398 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b55cd623-5f55-4111-a671-e409e6c02697-metrics-certs podName:b55cd623-5f55-4111-a671-e409e6c02697 nodeName:}" failed. No retries permitted until 2026-03-14 09:14:05.442389961 +0000 UTC m=+998.414672004 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b55cd623-5f55-4111-a671-e409e6c02697-metrics-certs") pod "frr-k8s-4sspm" (UID: "b55cd623-5f55-4111-a671-e409e6c02697") : secret "frr-k8s-certs-secret" not found Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.942592 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-reloader\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.942688 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b55cd623-5f55-4111-a671-e409e6c02697-frr-startup\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.942762 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/b55cd623-5f55-4111-a671-e409e6c02697-metrics\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.976263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8dpz\" (UniqueName: \"kubernetes.io/projected/b55cd623-5f55-4111-a671-e409e6c02697-kube-api-access-w8dpz\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:04 crc kubenswrapper[4869]: I0314 09:14:04.993647 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwr76\" (UniqueName: \"kubernetes.io/projected/c1bf896c-b7f5-4ee8-a8f3-531729f11481-kube-api-access-cwr76\") pod \"frr-k8s-webhook-server-bcc4b6f68-b6tck\" (UID: \"c1bf896c-b7f5-4ee8-a8f3-531729f11481\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.042735 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71df09af-93c6-48ff-b88b-cb91b0649482-cert\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.042830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/194fa38d-a339-4883-bc71-3601aa7441b3-metallb-excludel2\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.042849 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-memberlist\") pod \"speaker-px22f\" (UID: 
\"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.042878 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71df09af-93c6-48ff-b88b-cb91b0649482-metrics-certs\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.042897 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-metrics-certs\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.042921 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzbp\" (UniqueName: \"kubernetes.io/projected/71df09af-93c6-48ff-b88b-cb91b0649482-kube-api-access-ndzbp\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.042942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8vm5\" (UniqueName: \"kubernetes.io/projected/194fa38d-a339-4883-bc71-3601aa7441b3-kube-api-access-c8vm5\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.043924 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/194fa38d-a339-4883-bc71-3601aa7441b3-metallb-excludel2\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:05 crc 
kubenswrapper[4869]: E0314 09:14:05.043991 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 14 09:14:05 crc kubenswrapper[4869]: E0314 09:14:05.044025 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-memberlist podName:194fa38d-a339-4883-bc71-3601aa7441b3 nodeName:}" failed. No retries permitted until 2026-03-14 09:14:05.544015294 +0000 UTC m=+998.516297337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-memberlist") pod "speaker-px22f" (UID: "194fa38d-a339-4883-bc71-3601aa7441b3") : secret "metallb-memberlist" not found Mar 14 09:14:05 crc kubenswrapper[4869]: E0314 09:14:05.044166 4869 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 14 09:14:05 crc kubenswrapper[4869]: E0314 09:14:05.044195 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71df09af-93c6-48ff-b88b-cb91b0649482-metrics-certs podName:71df09af-93c6-48ff-b88b-cb91b0649482 nodeName:}" failed. No retries permitted until 2026-03-14 09:14:05.544188508 +0000 UTC m=+998.516470561 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71df09af-93c6-48ff-b88b-cb91b0649482-metrics-certs") pod "controller-7bb4cc7c98-kctvk" (UID: "71df09af-93c6-48ff-b88b-cb91b0649482") : secret "controller-certs-secret" not found Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.047857 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-metrics-certs\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.054963 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.064547 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8vm5\" (UniqueName: \"kubernetes.io/projected/194fa38d-a339-4883-bc71-3601aa7441b3-kube-api-access-c8vm5\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.067943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71df09af-93c6-48ff-b88b-cb91b0649482-cert\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.081113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzbp\" (UniqueName: \"kubernetes.io/projected/71df09af-93c6-48ff-b88b-cb91b0649482-kube-api-access-ndzbp\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.252852 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557994-7hfhb" event={"ID":"75c2f58d-4863-43b4-b4ec-d839270ade42","Type":"ContainerDied","Data":"9533a217bfe0676acef42fa551445ff3649740670fb31dbf3735d70b9ecbd427"} Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.252896 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557994-7hfhb" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.252899 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9533a217bfe0676acef42fa551445ff3649740670fb31dbf3735d70b9ecbd427" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.292079 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557988-9bhtd"] Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.295867 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557988-9bhtd"] Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.446698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1bf896c-b7f5-4ee8-a8f3-531729f11481-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b6tck\" (UID: \"c1bf896c-b7f5-4ee8-a8f3-531729f11481\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.446754 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b55cd623-5f55-4111-a671-e409e6c02697-metrics-certs\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.450164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b55cd623-5f55-4111-a671-e409e6c02697-metrics-certs\") pod \"frr-k8s-4sspm\" (UID: \"b55cd623-5f55-4111-a671-e409e6c02697\") " pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.450368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c1bf896c-b7f5-4ee8-a8f3-531729f11481-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b6tck\" (UID: \"c1bf896c-b7f5-4ee8-a8f3-531729f11481\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.548027 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71df09af-93c6-48ff-b88b-cb91b0649482-metrics-certs\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.548138 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-memberlist\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:05 crc kubenswrapper[4869]: E0314 09:14:05.548323 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 14 09:14:05 crc kubenswrapper[4869]: E0314 09:14:05.548424 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-memberlist podName:194fa38d-a339-4883-bc71-3601aa7441b3 nodeName:}" failed. No retries permitted until 2026-03-14 09:14:06.548400334 +0000 UTC m=+999.520682387 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-memberlist") pod "speaker-px22f" (UID: "194fa38d-a339-4883-bc71-3601aa7441b3") : secret "metallb-memberlist" not found Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.553499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71df09af-93c6-48ff-b88b-cb91b0649482-metrics-certs\") pod \"controller-7bb4cc7c98-kctvk\" (UID: \"71df09af-93c6-48ff-b88b-cb91b0649482\") " pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.649444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.666852 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.733803 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c274eb1e-ca49-4363-9eab-6508b6268654" path="/var/lib/kubelet/pods/c274eb1e-ca49-4363-9eab-6508b6268654/volumes" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.759119 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:05 crc kubenswrapper[4869]: I0314 09:14:05.882166 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck"] Mar 14 09:14:06 crc kubenswrapper[4869]: I0314 09:14:06.211611 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-kctvk"] Mar 14 09:14:06 crc kubenswrapper[4869]: I0314 09:14:06.269614 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerStarted","Data":"e4820f3a820ef034cfb814e7468c4ba746871279d87120e6b47bcc02045fc298"} Mar 14 09:14:06 crc kubenswrapper[4869]: I0314 09:14:06.270843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-kctvk" event={"ID":"71df09af-93c6-48ff-b88b-cb91b0649482","Type":"ContainerStarted","Data":"18e83b02b884e541010addcca7b1af18994d7911de4a11329013e74231cfa69c"} Mar 14 09:14:06 crc kubenswrapper[4869]: I0314 09:14:06.271898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" event={"ID":"c1bf896c-b7f5-4ee8-a8f3-531729f11481","Type":"ContainerStarted","Data":"615130778b663559fa13c98e6a9a26068f6c7b2801fbccd76167e5e12dd0d35b"} Mar 14 09:14:06 crc kubenswrapper[4869]: I0314 09:14:06.566623 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-memberlist\") pod \"speaker-px22f\" (UID: \"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:06 crc kubenswrapper[4869]: I0314 09:14:06.572196 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/194fa38d-a339-4883-bc71-3601aa7441b3-memberlist\") pod \"speaker-px22f\" (UID: 
\"194fa38d-a339-4883-bc71-3601aa7441b3\") " pod="metallb-system/speaker-px22f" Mar 14 09:14:06 crc kubenswrapper[4869]: I0314 09:14:06.642453 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-px22f" Mar 14 09:14:06 crc kubenswrapper[4869]: W0314 09:14:06.668629 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod194fa38d_a339_4883_bc71_3601aa7441b3.slice/crio-568991ceba5392c83a9e0986139757b4a582ab993fb8776bc1288fb735c7d657 WatchSource:0}: Error finding container 568991ceba5392c83a9e0986139757b4a582ab993fb8776bc1288fb735c7d657: Status 404 returned error can't find the container with id 568991ceba5392c83a9e0986139757b4a582ab993fb8776bc1288fb735c7d657 Mar 14 09:14:07 crc kubenswrapper[4869]: I0314 09:14:07.288389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-px22f" event={"ID":"194fa38d-a339-4883-bc71-3601aa7441b3","Type":"ContainerStarted","Data":"fa8aba4494d6c6771350765fe015d5d4dc67fea4ba0680d4d5bff4ff15657d78"} Mar 14 09:14:07 crc kubenswrapper[4869]: I0314 09:14:07.288770 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-px22f" event={"ID":"194fa38d-a339-4883-bc71-3601aa7441b3","Type":"ContainerStarted","Data":"080eda5f21aa9aae2b0d393a87d8c11523699e24d712bc15b3b0559c6ba2cf9b"} Mar 14 09:14:07 crc kubenswrapper[4869]: I0314 09:14:07.288785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-px22f" event={"ID":"194fa38d-a339-4883-bc71-3601aa7441b3","Type":"ContainerStarted","Data":"568991ceba5392c83a9e0986139757b4a582ab993fb8776bc1288fb735c7d657"} Mar 14 09:14:07 crc kubenswrapper[4869]: I0314 09:14:07.289159 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-px22f" Mar 14 09:14:07 crc kubenswrapper[4869]: I0314 09:14:07.294302 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/controller-7bb4cc7c98-kctvk" event={"ID":"71df09af-93c6-48ff-b88b-cb91b0649482","Type":"ContainerStarted","Data":"a1c02730eb27a5da1c9994c5fd4efbf8134bb7021e527240aa099eb291f680f7"} Mar 14 09:14:07 crc kubenswrapper[4869]: I0314 09:14:07.294334 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-kctvk" event={"ID":"71df09af-93c6-48ff-b88b-cb91b0649482","Type":"ContainerStarted","Data":"815b56b7680974ef66efdabcf40f8977e5084a22858187577b2d2dd5966ca04b"} Mar 14 09:14:07 crc kubenswrapper[4869]: I0314 09:14:07.294986 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:07 crc kubenswrapper[4869]: I0314 09:14:07.331438 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-px22f" podStartSLOduration=3.331419171 podStartE2EDuration="3.331419171s" podCreationTimestamp="2026-03-14 09:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:14:07.325897595 +0000 UTC m=+1000.298179668" watchObservedRunningTime="2026-03-14 09:14:07.331419171 +0000 UTC m=+1000.303701244" Mar 14 09:14:07 crc kubenswrapper[4869]: I0314 09:14:07.340668 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-kctvk" podStartSLOduration=3.340650728 podStartE2EDuration="3.340650728s" podCreationTimestamp="2026-03-14 09:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:14:07.338865474 +0000 UTC m=+1000.311147527" watchObservedRunningTime="2026-03-14 09:14:07.340650728 +0000 UTC m=+1000.312932781" Mar 14 09:14:09 crc kubenswrapper[4869]: I0314 09:14:09.605290 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:14:09 crc kubenswrapper[4869]: I0314 09:14:09.605697 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:14:13 crc kubenswrapper[4869]: I0314 09:14:13.333691 4869 generic.go:334] "Generic (PLEG): container finished" podID="b55cd623-5f55-4111-a671-e409e6c02697" containerID="701aedf43978ddebedb8a8cd78421b2ed66fe1d60efd67b5eda3a7f5951eaa7d" exitCode=0 Mar 14 09:14:13 crc kubenswrapper[4869]: I0314 09:14:13.333738 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerDied","Data":"701aedf43978ddebedb8a8cd78421b2ed66fe1d60efd67b5eda3a7f5951eaa7d"} Mar 14 09:14:13 crc kubenswrapper[4869]: I0314 09:14:13.335526 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" event={"ID":"c1bf896c-b7f5-4ee8-a8f3-531729f11481","Type":"ContainerStarted","Data":"e9d9ec5c0183747f54580e58f64772a8a9965a49cc012b90d1b3f5a3a8eeab37"} Mar 14 09:14:13 crc kubenswrapper[4869]: I0314 09:14:13.335659 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:13 crc kubenswrapper[4869]: I0314 09:14:13.374396 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" podStartSLOduration=2.199810276 podStartE2EDuration="9.374374599s" podCreationTimestamp="2026-03-14 09:14:04 +0000 UTC" 
firstStartedPulling="2026-03-14 09:14:05.895386409 +0000 UTC m=+998.867668482" lastFinishedPulling="2026-03-14 09:14:13.069950752 +0000 UTC m=+1006.042232805" observedRunningTime="2026-03-14 09:14:13.370914193 +0000 UTC m=+1006.343196266" watchObservedRunningTime="2026-03-14 09:14:13.374374599 +0000 UTC m=+1006.346656662" Mar 14 09:14:14 crc kubenswrapper[4869]: I0314 09:14:14.343145 4869 generic.go:334] "Generic (PLEG): container finished" podID="b55cd623-5f55-4111-a671-e409e6c02697" containerID="674b52fa54ba9f0bf22bd8f39c26ff804e467b23263ba6f4e0b4ed893eb6fd51" exitCode=0 Mar 14 09:14:14 crc kubenswrapper[4869]: I0314 09:14:14.343243 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerDied","Data":"674b52fa54ba9f0bf22bd8f39c26ff804e467b23263ba6f4e0b4ed893eb6fd51"} Mar 14 09:14:15 crc kubenswrapper[4869]: I0314 09:14:15.354562 4869 generic.go:334] "Generic (PLEG): container finished" podID="b55cd623-5f55-4111-a671-e409e6c02697" containerID="5ce4a931f7e0b45ec2498b4634e78a2649c0d15e32ab76afe0e935649fb27b7f" exitCode=0 Mar 14 09:14:15 crc kubenswrapper[4869]: I0314 09:14:15.354618 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerDied","Data":"5ce4a931f7e0b45ec2498b4634e78a2649c0d15e32ab76afe0e935649fb27b7f"} Mar 14 09:14:16 crc kubenswrapper[4869]: I0314 09:14:16.366128 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerStarted","Data":"73e91f1149dd7ce1859a4c611063d1081039aad522495566d43b9993b8ec69fe"} Mar 14 09:14:16 crc kubenswrapper[4869]: I0314 09:14:16.366458 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" 
event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerStarted","Data":"02688a306010bd90b1149a19d5aa72ee897f3a4fb6d557e6cb5f76b58729bba5"} Mar 14 09:14:16 crc kubenswrapper[4869]: I0314 09:14:16.366467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerStarted","Data":"7a1232a096d526d3e6a0f09cc5bd8bd0c5b500ef3e321de89375b5abd9ddf2db"} Mar 14 09:14:16 crc kubenswrapper[4869]: I0314 09:14:16.366476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerStarted","Data":"74ae30b49d5cef73f2dc78ee4d5ddc157ce8c655ee7bb6d93be417e20fecde01"} Mar 14 09:14:16 crc kubenswrapper[4869]: I0314 09:14:16.366485 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerStarted","Data":"16d394c283ffa4e00121da4dd9dd1e9d9703001aa18dba932801e44fa30c8974"} Mar 14 09:14:16 crc kubenswrapper[4869]: I0314 09:14:16.654357 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-px22f" Mar 14 09:14:17 crc kubenswrapper[4869]: I0314 09:14:17.379307 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4sspm" event={"ID":"b55cd623-5f55-4111-a671-e409e6c02697","Type":"ContainerStarted","Data":"2c3af0d959c1774cebb1b627f715e290b090a0d94b95885a553eb29129aebd93"} Mar 14 09:14:17 crc kubenswrapper[4869]: I0314 09:14:17.380394 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:17 crc kubenswrapper[4869]: I0314 09:14:17.401640 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4sspm" podStartSLOduration=6.19231121 podStartE2EDuration="13.401622609s" podCreationTimestamp="2026-03-14 09:14:04 +0000 
UTC" firstStartedPulling="2026-03-14 09:14:05.899949091 +0000 UTC m=+998.872231144" lastFinishedPulling="2026-03-14 09:14:13.10926048 +0000 UTC m=+1006.081542543" observedRunningTime="2026-03-14 09:14:17.399706712 +0000 UTC m=+1010.371988815" watchObservedRunningTime="2026-03-14 09:14:17.401622609 +0000 UTC m=+1010.373904682" Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.332215 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-69vvm"] Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.333777 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-69vvm" Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.337336 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.337596 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.337840 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-66plw" Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.355043 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-69vvm"] Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.457285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwmxw\" (UniqueName: \"kubernetes.io/projected/6e2b5e68-d313-4c9c-bfe2-124e3d90c02e-kube-api-access-fwmxw\") pod \"openstack-operator-index-69vvm\" (UID: \"6e2b5e68-d313-4c9c-bfe2-124e3d90c02e\") " pod="openstack-operators/openstack-operator-index-69vvm" Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.559482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fwmxw\" (UniqueName: \"kubernetes.io/projected/6e2b5e68-d313-4c9c-bfe2-124e3d90c02e-kube-api-access-fwmxw\") pod \"openstack-operator-index-69vvm\" (UID: \"6e2b5e68-d313-4c9c-bfe2-124e3d90c02e\") " pod="openstack-operators/openstack-operator-index-69vvm" Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.584172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwmxw\" (UniqueName: \"kubernetes.io/projected/6e2b5e68-d313-4c9c-bfe2-124e3d90c02e-kube-api-access-fwmxw\") pod \"openstack-operator-index-69vvm\" (UID: \"6e2b5e68-d313-4c9c-bfe2-124e3d90c02e\") " pod="openstack-operators/openstack-operator-index-69vvm" Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.665288 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-69vvm" Mar 14 09:14:19 crc kubenswrapper[4869]: I0314 09:14:19.908043 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-69vvm"] Mar 14 09:14:20 crc kubenswrapper[4869]: I0314 09:14:20.409037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-69vvm" event={"ID":"6e2b5e68-d313-4c9c-bfe2-124e3d90c02e","Type":"ContainerStarted","Data":"d2225d0486f75d77f06ff681c8f8b48888b135a3778aa7c85644f6ba88d6c231"} Mar 14 09:14:20 crc kubenswrapper[4869]: I0314 09:14:20.668354 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:20 crc kubenswrapper[4869]: I0314 09:14:20.709422 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:22 crc kubenswrapper[4869]: I0314 09:14:22.669686 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-69vvm"] Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.296035 4869 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-z7h7s"] Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.297101 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z7h7s" Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.364876 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z7h7s"] Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.417324 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bnlx\" (UniqueName: \"kubernetes.io/projected/a9d728d8-cd35-45aa-8d07-9b868dc8b137-kube-api-access-6bnlx\") pod \"openstack-operator-index-z7h7s\" (UID: \"a9d728d8-cd35-45aa-8d07-9b868dc8b137\") " pod="openstack-operators/openstack-operator-index-z7h7s" Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.436347 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-69vvm" event={"ID":"6e2b5e68-d313-4c9c-bfe2-124e3d90c02e","Type":"ContainerStarted","Data":"4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93"} Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.436532 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-69vvm" podUID="6e2b5e68-d313-4c9c-bfe2-124e3d90c02e" containerName="registry-server" containerID="cri-o://4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93" gracePeriod=2 Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.456370 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-69vvm" podStartSLOduration=1.18954617 podStartE2EDuration="4.456354845s" podCreationTimestamp="2026-03-14 09:14:19 +0000 UTC" firstStartedPulling="2026-03-14 09:14:19.924444583 +0000 UTC m=+1012.896726626" 
lastFinishedPulling="2026-03-14 09:14:23.191253248 +0000 UTC m=+1016.163535301" observedRunningTime="2026-03-14 09:14:23.453743291 +0000 UTC m=+1016.426025364" watchObservedRunningTime="2026-03-14 09:14:23.456354845 +0000 UTC m=+1016.428636898" Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.537624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bnlx\" (UniqueName: \"kubernetes.io/projected/a9d728d8-cd35-45aa-8d07-9b868dc8b137-kube-api-access-6bnlx\") pod \"openstack-operator-index-z7h7s\" (UID: \"a9d728d8-cd35-45aa-8d07-9b868dc8b137\") " pod="openstack-operators/openstack-operator-index-z7h7s" Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.568930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bnlx\" (UniqueName: \"kubernetes.io/projected/a9d728d8-cd35-45aa-8d07-9b868dc8b137-kube-api-access-6bnlx\") pod \"openstack-operator-index-z7h7s\" (UID: \"a9d728d8-cd35-45aa-8d07-9b868dc8b137\") " pod="openstack-operators/openstack-operator-index-z7h7s" Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.628677 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z7h7s" Mar 14 09:14:23 crc kubenswrapper[4869]: I0314 09:14:23.873858 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-69vvm" Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.046679 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwmxw\" (UniqueName: \"kubernetes.io/projected/6e2b5e68-d313-4c9c-bfe2-124e3d90c02e-kube-api-access-fwmxw\") pod \"6e2b5e68-d313-4c9c-bfe2-124e3d90c02e\" (UID: \"6e2b5e68-d313-4c9c-bfe2-124e3d90c02e\") " Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.052613 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2b5e68-d313-4c9c-bfe2-124e3d90c02e-kube-api-access-fwmxw" (OuterVolumeSpecName: "kube-api-access-fwmxw") pod "6e2b5e68-d313-4c9c-bfe2-124e3d90c02e" (UID: "6e2b5e68-d313-4c9c-bfe2-124e3d90c02e"). InnerVolumeSpecName "kube-api-access-fwmxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.092867 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z7h7s"] Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.148266 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwmxw\" (UniqueName: \"kubernetes.io/projected/6e2b5e68-d313-4c9c-bfe2-124e3d90c02e-kube-api-access-fwmxw\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.446108 4869 generic.go:334] "Generic (PLEG): container finished" podID="6e2b5e68-d313-4c9c-bfe2-124e3d90c02e" containerID="4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93" exitCode=0 Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.446165 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-69vvm" Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.446209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-69vvm" event={"ID":"6e2b5e68-d313-4c9c-bfe2-124e3d90c02e","Type":"ContainerDied","Data":"4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93"} Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.446569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-69vvm" event={"ID":"6e2b5e68-d313-4c9c-bfe2-124e3d90c02e","Type":"ContainerDied","Data":"d2225d0486f75d77f06ff681c8f8b48888b135a3778aa7c85644f6ba88d6c231"} Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.446593 4869 scope.go:117] "RemoveContainer" containerID="4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93" Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.453123 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z7h7s" event={"ID":"a9d728d8-cd35-45aa-8d07-9b868dc8b137","Type":"ContainerStarted","Data":"f759ca30b6bc0492c4b9f85b34042c5df4f628bb34644fa780781b204b446c2f"} Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.453166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z7h7s" event={"ID":"a9d728d8-cd35-45aa-8d07-9b868dc8b137","Type":"ContainerStarted","Data":"7b7b860b5edae93573680f4ab0f14ddb1893d3d79f1716bba67975e1085a71e9"} Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.472110 4869 scope.go:117] "RemoveContainer" containerID="4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93" Mar 14 09:14:24 crc kubenswrapper[4869]: E0314 09:14:24.472550 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93\": container with ID starting with 4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93 not found: ID does not exist" containerID="4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93" Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.472582 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93"} err="failed to get container status \"4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93\": rpc error: code = NotFound desc = could not find container \"4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93\": container with ID starting with 4cf318f6b685b7131b9eec003fc34df1310ceafe5cec022ffe71120c09590e93 not found: ID does not exist" Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.497505 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-z7h7s" podStartSLOduration=1.444689845 podStartE2EDuration="1.497480854s" podCreationTimestamp="2026-03-14 09:14:23 +0000 UTC" firstStartedPulling="2026-03-14 09:14:24.103866521 +0000 UTC m=+1017.076148584" lastFinishedPulling="2026-03-14 09:14:24.15665754 +0000 UTC m=+1017.128939593" observedRunningTime="2026-03-14 09:14:24.48070037 +0000 UTC m=+1017.452982433" watchObservedRunningTime="2026-03-14 09:14:24.497480854 +0000 UTC m=+1017.469762947" Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.500430 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-69vvm"] Mar 14 09:14:24 crc kubenswrapper[4869]: I0314 09:14:24.506456 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-69vvm"] Mar 14 09:14:25 crc kubenswrapper[4869]: I0314 09:14:25.654010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b6tck" Mar 14 09:14:25 crc kubenswrapper[4869]: I0314 09:14:25.670338 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4sspm" Mar 14 09:14:25 crc kubenswrapper[4869]: I0314 09:14:25.713756 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2b5e68-d313-4c9c-bfe2-124e3d90c02e" path="/var/lib/kubelet/pods/6e2b5e68-d313-4c9c-bfe2-124e3d90c02e/volumes" Mar 14 09:14:25 crc kubenswrapper[4869]: I0314 09:14:25.763309 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-kctvk" Mar 14 09:14:33 crc kubenswrapper[4869]: I0314 09:14:33.629593 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-z7h7s" Mar 14 09:14:33 crc kubenswrapper[4869]: I0314 09:14:33.630240 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-z7h7s" Mar 14 09:14:33 crc kubenswrapper[4869]: I0314 09:14:33.673037 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-z7h7s" Mar 14 09:14:34 crc kubenswrapper[4869]: I0314 09:14:34.580274 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-z7h7s" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.490824 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q9m9f"] Mar 14 09:14:37 crc kubenswrapper[4869]: E0314 09:14:37.492604 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e2b5e68-d313-4c9c-bfe2-124e3d90c02e" containerName="registry-server" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.492624 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e2b5e68-d313-4c9c-bfe2-124e3d90c02e" 
containerName="registry-server" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.492766 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e2b5e68-d313-4c9c-bfe2-124e3d90c02e" containerName="registry-server" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.494436 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.501359 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9m9f"] Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.678551 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-utilities\") pod \"redhat-marketplace-q9m9f\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.678629 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6qrq\" (UniqueName: \"kubernetes.io/projected/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-kube-api-access-p6qrq\") pod \"redhat-marketplace-q9m9f\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.678666 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-catalog-content\") pod \"redhat-marketplace-q9m9f\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.780331 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-utilities\") pod \"redhat-marketplace-q9m9f\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.780525 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6qrq\" (UniqueName: \"kubernetes.io/projected/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-kube-api-access-p6qrq\") pod \"redhat-marketplace-q9m9f\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.780559 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-catalog-content\") pod \"redhat-marketplace-q9m9f\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.780863 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-utilities\") pod \"redhat-marketplace-q9m9f\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.780877 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-catalog-content\") pod \"redhat-marketplace-q9m9f\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.805095 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6qrq\" (UniqueName: 
\"kubernetes.io/projected/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-kube-api-access-p6qrq\") pod \"redhat-marketplace-q9m9f\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:37 crc kubenswrapper[4869]: I0314 09:14:37.814820 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:38 crc kubenswrapper[4869]: I0314 09:14:38.031010 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9m9f"] Mar 14 09:14:38 crc kubenswrapper[4869]: W0314 09:14:38.037612 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a4ec2fe_ddde_40b2_8f06_6132db0ac95a.slice/crio-3c256f5c618b94ad95d6394860a2f7ff16ec0d82a417dfeeb458f0b628d0f841 WatchSource:0}: Error finding container 3c256f5c618b94ad95d6394860a2f7ff16ec0d82a417dfeeb458f0b628d0f841: Status 404 returned error can't find the container with id 3c256f5c618b94ad95d6394860a2f7ff16ec0d82a417dfeeb458f0b628d0f841 Mar 14 09:14:38 crc kubenswrapper[4869]: I0314 09:14:38.567248 4869 generic.go:334] "Generic (PLEG): container finished" podID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerID="1338b354a765864f7b0f4b5a5d7a41d1b3270481cc11fd19a9a62a5022ce336a" exitCode=0 Mar 14 09:14:38 crc kubenswrapper[4869]: I0314 09:14:38.567308 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9m9f" event={"ID":"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a","Type":"ContainerDied","Data":"1338b354a765864f7b0f4b5a5d7a41d1b3270481cc11fd19a9a62a5022ce336a"} Mar 14 09:14:38 crc kubenswrapper[4869]: I0314 09:14:38.567500 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9m9f" 
event={"ID":"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a","Type":"ContainerStarted","Data":"3c256f5c618b94ad95d6394860a2f7ff16ec0d82a417dfeeb458f0b628d0f841"} Mar 14 09:14:39 crc kubenswrapper[4869]: I0314 09:14:39.605559 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:14:39 crc kubenswrapper[4869]: I0314 09:14:39.605931 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:14:39 crc kubenswrapper[4869]: I0314 09:14:39.606064 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:14:39 crc kubenswrapper[4869]: I0314 09:14:39.606930 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6659e883f1bb6e9d0a6c6412fd0c4a00d22fe987bb78cc13d8b2e976a19f9ff0"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:14:39 crc kubenswrapper[4869]: I0314 09:14:39.607014 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://6659e883f1bb6e9d0a6c6412fd0c4a00d22fe987bb78cc13d8b2e976a19f9ff0" gracePeriod=600 Mar 14 09:14:39 crc kubenswrapper[4869]: I0314 09:14:39.933221 
4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x"] Mar 14 09:14:39 crc kubenswrapper[4869]: I0314 09:14:39.934867 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:39 crc kubenswrapper[4869]: I0314 09:14:39.937628 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xf472" Mar 14 09:14:39 crc kubenswrapper[4869]: I0314 09:14:39.946060 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x"] Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.112160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76tpz\" (UniqueName: \"kubernetes.io/projected/ad72d067-5a30-4464-8a54-bdc074e552ba-kube-api-access-76tpz\") pod \"62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.112634 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-bundle\") pod \"62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.112703 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-util\") pod 
\"62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.214885 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-util\") pod \"62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.214984 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76tpz\" (UniqueName: \"kubernetes.io/projected/ad72d067-5a30-4464-8a54-bdc074e552ba-kube-api-access-76tpz\") pod \"62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.215025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-bundle\") pod \"62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.216195 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-bundle\") pod \"62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " 
pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.216391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-util\") pod \"62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.248550 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76tpz\" (UniqueName: \"kubernetes.io/projected/ad72d067-5a30-4464-8a54-bdc074e552ba-kube-api-access-76tpz\") pod \"62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.253283 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.490437 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x"] Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.582368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" event={"ID":"ad72d067-5a30-4464-8a54-bdc074e552ba","Type":"ContainerStarted","Data":"2e85c81e635e917d7a5f7a6a77c01dc04ede9a95b2a20f680198d037645b6415"} Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.586703 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="6659e883f1bb6e9d0a6c6412fd0c4a00d22fe987bb78cc13d8b2e976a19f9ff0" exitCode=0 Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.586741 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"6659e883f1bb6e9d0a6c6412fd0c4a00d22fe987bb78cc13d8b2e976a19f9ff0"} Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.586794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"56010979bbae19d804da289e0aa16d793e02c78a300551a90489925126f6f41f"} Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.586876 4869 scope.go:117] "RemoveContainer" containerID="f999968c5938eecacf43f0f074516b2654961d6ed5e7331aa8e5f6081cb0111c" Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.588355 4869 generic.go:334] "Generic (PLEG): container finished" podID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" 
containerID="47f411dd2f3998a4a28d8bd6c3bd3223e7cdee686ac4316bdbe5ce29752022ea" exitCode=0 Mar 14 09:14:40 crc kubenswrapper[4869]: I0314 09:14:40.588411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9m9f" event={"ID":"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a","Type":"ContainerDied","Data":"47f411dd2f3998a4a28d8bd6c3bd3223e7cdee686ac4316bdbe5ce29752022ea"} Mar 14 09:14:41 crc kubenswrapper[4869]: I0314 09:14:41.601940 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9m9f" event={"ID":"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a","Type":"ContainerStarted","Data":"93ec5dff39f2d86e8e5862c23ce7235d8e691ba9c9ac545859f20596ac562802"} Mar 14 09:14:41 crc kubenswrapper[4869]: I0314 09:14:41.606278 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad72d067-5a30-4464-8a54-bdc074e552ba" containerID="d290849a6e73b6c1716d26848bc3f8628b9f5a612fe35b1145f9d1b52e0225bb" exitCode=0 Mar 14 09:14:41 crc kubenswrapper[4869]: I0314 09:14:41.606394 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" event={"ID":"ad72d067-5a30-4464-8a54-bdc074e552ba","Type":"ContainerDied","Data":"d290849a6e73b6c1716d26848bc3f8628b9f5a612fe35b1145f9d1b52e0225bb"} Mar 14 09:14:41 crc kubenswrapper[4869]: I0314 09:14:41.671769 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q9m9f" podStartSLOduration=2.094384342 podStartE2EDuration="4.67174329s" podCreationTimestamp="2026-03-14 09:14:37 +0000 UTC" firstStartedPulling="2026-03-14 09:14:38.568767359 +0000 UTC m=+1031.541049412" lastFinishedPulling="2026-03-14 09:14:41.146126307 +0000 UTC m=+1034.118408360" observedRunningTime="2026-03-14 09:14:41.637379263 +0000 UTC m=+1034.609661356" watchObservedRunningTime="2026-03-14 09:14:41.67174329 +0000 UTC m=+1034.644025363" Mar 14 09:14:42 crc 
kubenswrapper[4869]: I0314 09:14:42.621175 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad72d067-5a30-4464-8a54-bdc074e552ba" containerID="5b20fae53f387574777eaa63243633ff8e979cd7ee21b9af1358b097377f41f6" exitCode=0 Mar 14 09:14:42 crc kubenswrapper[4869]: I0314 09:14:42.621282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" event={"ID":"ad72d067-5a30-4464-8a54-bdc074e552ba","Type":"ContainerDied","Data":"5b20fae53f387574777eaa63243633ff8e979cd7ee21b9af1358b097377f41f6"} Mar 14 09:14:43 crc kubenswrapper[4869]: I0314 09:14:43.630995 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad72d067-5a30-4464-8a54-bdc074e552ba" containerID="74b0a2e3f94e0292054b2a6f72311fa046dc0f824f3a9a85beeac8a78fb9d907" exitCode=0 Mar 14 09:14:43 crc kubenswrapper[4869]: I0314 09:14:43.631169 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" event={"ID":"ad72d067-5a30-4464-8a54-bdc074e552ba","Type":"ContainerDied","Data":"74b0a2e3f94e0292054b2a6f72311fa046dc0f824f3a9a85beeac8a78fb9d907"} Mar 14 09:14:44 crc kubenswrapper[4869]: I0314 09:14:44.958690 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.095596 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-bundle\") pod \"ad72d067-5a30-4464-8a54-bdc074e552ba\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.095789 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-util\") pod \"ad72d067-5a30-4464-8a54-bdc074e552ba\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.095845 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76tpz\" (UniqueName: \"kubernetes.io/projected/ad72d067-5a30-4464-8a54-bdc074e552ba-kube-api-access-76tpz\") pod \"ad72d067-5a30-4464-8a54-bdc074e552ba\" (UID: \"ad72d067-5a30-4464-8a54-bdc074e552ba\") " Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.097246 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-bundle" (OuterVolumeSpecName: "bundle") pod "ad72d067-5a30-4464-8a54-bdc074e552ba" (UID: "ad72d067-5a30-4464-8a54-bdc074e552ba"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.105958 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad72d067-5a30-4464-8a54-bdc074e552ba-kube-api-access-76tpz" (OuterVolumeSpecName: "kube-api-access-76tpz") pod "ad72d067-5a30-4464-8a54-bdc074e552ba" (UID: "ad72d067-5a30-4464-8a54-bdc074e552ba"). InnerVolumeSpecName "kube-api-access-76tpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.109686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-util" (OuterVolumeSpecName: "util") pod "ad72d067-5a30-4464-8a54-bdc074e552ba" (UID: "ad72d067-5a30-4464-8a54-bdc074e552ba"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.198286 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.198401 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad72d067-5a30-4464-8a54-bdc074e552ba-util\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.198434 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76tpz\" (UniqueName: \"kubernetes.io/projected/ad72d067-5a30-4464-8a54-bdc074e552ba-kube-api-access-76tpz\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.649182 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" event={"ID":"ad72d067-5a30-4464-8a54-bdc074e552ba","Type":"ContainerDied","Data":"2e85c81e635e917d7a5f7a6a77c01dc04ede9a95b2a20f680198d037645b6415"} Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.649272 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e85c81e635e917d7a5f7a6a77c01dc04ede9a95b2a20f680198d037645b6415" Mar 14 09:14:45 crc kubenswrapper[4869]: I0314 09:14:45.649278 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x" Mar 14 09:14:47 crc kubenswrapper[4869]: I0314 09:14:47.816053 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:47 crc kubenswrapper[4869]: I0314 09:14:47.816721 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:47 crc kubenswrapper[4869]: I0314 09:14:47.856403 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.202844 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn"] Mar 14 09:14:48 crc kubenswrapper[4869]: E0314 09:14:48.203076 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad72d067-5a30-4464-8a54-bdc074e552ba" containerName="extract" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.203088 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad72d067-5a30-4464-8a54-bdc074e552ba" containerName="extract" Mar 14 09:14:48 crc kubenswrapper[4869]: E0314 09:14:48.203099 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad72d067-5a30-4464-8a54-bdc074e552ba" containerName="util" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.203105 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad72d067-5a30-4464-8a54-bdc074e552ba" containerName="util" Mar 14 09:14:48 crc kubenswrapper[4869]: E0314 09:14:48.203128 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad72d067-5a30-4464-8a54-bdc074e552ba" containerName="pull" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.203134 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad72d067-5a30-4464-8a54-bdc074e552ba" 
containerName="pull" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.203239 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad72d067-5a30-4464-8a54-bdc074e552ba" containerName="extract" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.203690 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.211581 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-px56l" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.235269 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn"] Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.348253 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sbmk\" (UniqueName: \"kubernetes.io/projected/580d9d1b-c740-4d28-b208-99a9ba7cd2ff-kube-api-access-4sbmk\") pod \"openstack-operator-controller-init-6ccbf6d758-dckvn\" (UID: \"580d9d1b-c740-4d28-b208-99a9ba7cd2ff\") " pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.448899 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sbmk\" (UniqueName: \"kubernetes.io/projected/580d9d1b-c740-4d28-b208-99a9ba7cd2ff-kube-api-access-4sbmk\") pod \"openstack-operator-controller-init-6ccbf6d758-dckvn\" (UID: \"580d9d1b-c740-4d28-b208-99a9ba7cd2ff\") " pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.467283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sbmk\" (UniqueName: 
\"kubernetes.io/projected/580d9d1b-c740-4d28-b208-99a9ba7cd2ff-kube-api-access-4sbmk\") pod \"openstack-operator-controller-init-6ccbf6d758-dckvn\" (UID: \"580d9d1b-c740-4d28-b208-99a9ba7cd2ff\") " pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.521003 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.737669 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:48 crc kubenswrapper[4869]: I0314 09:14:48.981714 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn"] Mar 14 09:14:48 crc kubenswrapper[4869]: W0314 09:14:48.991391 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod580d9d1b_c740_4d28_b208_99a9ba7cd2ff.slice/crio-e15fb9c34dae4402af81e31339b34f27a18e0755b538abe78bfb1cfd6c9f6c25 WatchSource:0}: Error finding container e15fb9c34dae4402af81e31339b34f27a18e0755b538abe78bfb1cfd6c9f6c25: Status 404 returned error can't find the container with id e15fb9c34dae4402af81e31339b34f27a18e0755b538abe78bfb1cfd6c9f6c25 Mar 14 09:14:49 crc kubenswrapper[4869]: I0314 09:14:49.742825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" event={"ID":"580d9d1b-c740-4d28-b208-99a9ba7cd2ff","Type":"ContainerStarted","Data":"e15fb9c34dae4402af81e31339b34f27a18e0755b538abe78bfb1cfd6c9f6c25"} Mar 14 09:14:50 crc kubenswrapper[4869]: I0314 09:14:50.276598 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9m9f"] Mar 14 09:14:51 crc kubenswrapper[4869]: I0314 
09:14:51.737584 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q9m9f" podUID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerName="registry-server" containerID="cri-o://93ec5dff39f2d86e8e5862c23ce7235d8e691ba9c9ac545859f20596ac562802" gracePeriod=2 Mar 14 09:14:51 crc kubenswrapper[4869]: I0314 09:14:51.995201 4869 scope.go:117] "RemoveContainer" containerID="f4efcd7105b78b04fc7894ba7c222559706a70608d2cd0700012771fc3fe1b6f" Mar 14 09:14:53 crc kubenswrapper[4869]: I0314 09:14:53.768324 4869 generic.go:334] "Generic (PLEG): container finished" podID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerID="93ec5dff39f2d86e8e5862c23ce7235d8e691ba9c9ac545859f20596ac562802" exitCode=0 Mar 14 09:14:53 crc kubenswrapper[4869]: I0314 09:14:53.768468 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9m9f" event={"ID":"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a","Type":"ContainerDied","Data":"93ec5dff39f2d86e8e5862c23ce7235d8e691ba9c9ac545859f20596ac562802"} Mar 14 09:14:53 crc kubenswrapper[4869]: I0314 09:14:53.946167 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.031711 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6qrq\" (UniqueName: \"kubernetes.io/projected/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-kube-api-access-p6qrq\") pod \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.031805 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-utilities\") pod \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.031892 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-catalog-content\") pod \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\" (UID: \"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a\") " Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.033498 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-utilities" (OuterVolumeSpecName: "utilities") pod "2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" (UID: "2a4ec2fe-ddde-40b2-8f06-6132db0ac95a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.039946 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-kube-api-access-p6qrq" (OuterVolumeSpecName: "kube-api-access-p6qrq") pod "2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" (UID: "2a4ec2fe-ddde-40b2-8f06-6132db0ac95a"). InnerVolumeSpecName "kube-api-access-p6qrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.066004 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" (UID: "2a4ec2fe-ddde-40b2-8f06-6132db0ac95a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.132726 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6qrq\" (UniqueName: \"kubernetes.io/projected/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-kube-api-access-p6qrq\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.132762 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.132773 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.784181 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9m9f" event={"ID":"2a4ec2fe-ddde-40b2-8f06-6132db0ac95a","Type":"ContainerDied","Data":"3c256f5c618b94ad95d6394860a2f7ff16ec0d82a417dfeeb458f0b628d0f841"} Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.784297 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9m9f" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.784690 4869 scope.go:117] "RemoveContainer" containerID="93ec5dff39f2d86e8e5862c23ce7235d8e691ba9c9ac545859f20596ac562802" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.802119 4869 scope.go:117] "RemoveContainer" containerID="47f411dd2f3998a4a28d8bd6c3bd3223e7cdee686ac4316bdbe5ce29752022ea" Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.817309 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9m9f"] Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.821679 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9m9f"] Mar 14 09:14:54 crc kubenswrapper[4869]: I0314 09:14:54.841114 4869 scope.go:117] "RemoveContainer" containerID="1338b354a765864f7b0f4b5a5d7a41d1b3270481cc11fd19a9a62a5022ce336a" Mar 14 09:14:55 crc kubenswrapper[4869]: I0314 09:14:55.722561 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" path="/var/lib/kubelet/pods/2a4ec2fe-ddde-40b2-8f06-6132db0ac95a/volumes" Mar 14 09:14:55 crc kubenswrapper[4869]: I0314 09:14:55.817076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" event={"ID":"580d9d1b-c740-4d28-b208-99a9ba7cd2ff","Type":"ContainerStarted","Data":"2a5674f0993fc538ebc4e12343b6f78cf3ee52e50c7e108cab9ad17ae874556b"} Mar 14 09:14:55 crc kubenswrapper[4869]: I0314 09:14:55.817494 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" Mar 14 09:14:55 crc kubenswrapper[4869]: I0314 09:14:55.872370 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" 
podStartSLOduration=2.005758394 podStartE2EDuration="7.872349049s" podCreationTimestamp="2026-03-14 09:14:48 +0000 UTC" firstStartedPulling="2026-03-14 09:14:48.995489037 +0000 UTC m=+1041.967771090" lastFinishedPulling="2026-03-14 09:14:54.862079672 +0000 UTC m=+1047.834361745" observedRunningTime="2026-03-14 09:14:55.867218274 +0000 UTC m=+1048.839500337" watchObservedRunningTime="2026-03-14 09:14:55.872349049 +0000 UTC m=+1048.844631112" Mar 14 09:14:59 crc kubenswrapper[4869]: I0314 09:14:59.033839 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6ccbf6d758-dckvn" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.146936 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl"] Mar 14 09:15:00 crc kubenswrapper[4869]: E0314 09:15:00.147742 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerName="extract-content" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.147768 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerName="extract-content" Mar 14 09:15:00 crc kubenswrapper[4869]: E0314 09:15:00.147819 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerName="extract-utilities" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.147836 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerName="extract-utilities" Mar 14 09:15:00 crc kubenswrapper[4869]: E0314 09:15:00.147862 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerName="registry-server" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.147905 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerName="registry-server" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.148182 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4ec2fe-ddde-40b2-8f06-6132db0ac95a" containerName="registry-server" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.149160 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.154578 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl"] Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.161167 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.182894 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.348444 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7538975c-1363-475d-a191-ec59a5810d40-config-volume\") pod \"collect-profiles-29557995-2nmzl\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.348585 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7538975c-1363-475d-a191-ec59a5810d40-secret-volume\") pod \"collect-profiles-29557995-2nmzl\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc 
kubenswrapper[4869]: I0314 09:15:00.348730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js79c\" (UniqueName: \"kubernetes.io/projected/7538975c-1363-475d-a191-ec59a5810d40-kube-api-access-js79c\") pod \"collect-profiles-29557995-2nmzl\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.449999 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7538975c-1363-475d-a191-ec59a5810d40-secret-volume\") pod \"collect-profiles-29557995-2nmzl\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.450073 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js79c\" (UniqueName: \"kubernetes.io/projected/7538975c-1363-475d-a191-ec59a5810d40-kube-api-access-js79c\") pod \"collect-profiles-29557995-2nmzl\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.450118 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7538975c-1363-475d-a191-ec59a5810d40-config-volume\") pod \"collect-profiles-29557995-2nmzl\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.451343 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7538975c-1363-475d-a191-ec59a5810d40-config-volume\") pod \"collect-profiles-29557995-2nmzl\" 
(UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.458543 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7538975c-1363-475d-a191-ec59a5810d40-secret-volume\") pod \"collect-profiles-29557995-2nmzl\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.475420 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js79c\" (UniqueName: \"kubernetes.io/projected/7538975c-1363-475d-a191-ec59a5810d40-kube-api-access-js79c\") pod \"collect-profiles-29557995-2nmzl\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.483687 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:00 crc kubenswrapper[4869]: I0314 09:15:00.955472 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl"] Mar 14 09:15:01 crc kubenswrapper[4869]: I0314 09:15:01.052651 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" event={"ID":"7538975c-1363-475d-a191-ec59a5810d40","Type":"ContainerStarted","Data":"f046decb4e71b702bc3908ee003a7950238e46ef1179630e2c6184759b828d8c"} Mar 14 09:15:02 crc kubenswrapper[4869]: I0314 09:15:02.064377 4869 generic.go:334] "Generic (PLEG): container finished" podID="7538975c-1363-475d-a191-ec59a5810d40" containerID="c98955ae2e1a961fbbd60a60ef4f9d9f4d8b24dfc7d10a7b9f787f1806da6372" exitCode=0 Mar 14 09:15:02 crc kubenswrapper[4869]: I0314 09:15:02.064458 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" event={"ID":"7538975c-1363-475d-a191-ec59a5810d40","Type":"ContainerDied","Data":"c98955ae2e1a961fbbd60a60ef4f9d9f4d8b24dfc7d10a7b9f787f1806da6372"} Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.378825 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.393139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7538975c-1363-475d-a191-ec59a5810d40-secret-volume\") pod \"7538975c-1363-475d-a191-ec59a5810d40\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.393333 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7538975c-1363-475d-a191-ec59a5810d40-config-volume\") pod \"7538975c-1363-475d-a191-ec59a5810d40\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.393383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js79c\" (UniqueName: \"kubernetes.io/projected/7538975c-1363-475d-a191-ec59a5810d40-kube-api-access-js79c\") pod \"7538975c-1363-475d-a191-ec59a5810d40\" (UID: \"7538975c-1363-475d-a191-ec59a5810d40\") " Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.394967 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7538975c-1363-475d-a191-ec59a5810d40-config-volume" (OuterVolumeSpecName: "config-volume") pod "7538975c-1363-475d-a191-ec59a5810d40" (UID: "7538975c-1363-475d-a191-ec59a5810d40"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.400633 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7538975c-1363-475d-a191-ec59a5810d40-kube-api-access-js79c" (OuterVolumeSpecName: "kube-api-access-js79c") pod "7538975c-1363-475d-a191-ec59a5810d40" (UID: "7538975c-1363-475d-a191-ec59a5810d40"). 
InnerVolumeSpecName "kube-api-access-js79c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.400829 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7538975c-1363-475d-a191-ec59a5810d40-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7538975c-1363-475d-a191-ec59a5810d40" (UID: "7538975c-1363-475d-a191-ec59a5810d40"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.494556 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7538975c-1363-475d-a191-ec59a5810d40-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.494599 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js79c\" (UniqueName: \"kubernetes.io/projected/7538975c-1363-475d-a191-ec59a5810d40-kube-api-access-js79c\") on node \"crc\" DevicePath \"\"" Mar 14 09:15:03 crc kubenswrapper[4869]: I0314 09:15:03.494610 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7538975c-1363-475d-a191-ec59a5810d40-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:15:04 crc kubenswrapper[4869]: I0314 09:15:04.085625 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" event={"ID":"7538975c-1363-475d-a191-ec59a5810d40","Type":"ContainerDied","Data":"f046decb4e71b702bc3908ee003a7950238e46ef1179630e2c6184759b828d8c"} Mar 14 09:15:04 crc kubenswrapper[4869]: I0314 09:15:04.085665 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl" Mar 14 09:15:04 crc kubenswrapper[4869]: I0314 09:15:04.085683 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f046decb4e71b702bc3908ee003a7950238e46ef1179630e2c6184759b828d8c" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.107937 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg"] Mar 14 09:15:19 crc kubenswrapper[4869]: E0314 09:15:19.108802 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7538975c-1363-475d-a191-ec59a5810d40" containerName="collect-profiles" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.108819 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7538975c-1363-475d-a191-ec59a5810d40" containerName="collect-profiles" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.108999 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7538975c-1363-475d-a191-ec59a5810d40" containerName="collect-profiles" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.110452 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.113613 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fslw2" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.119052 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5"] Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.120317 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.123645 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jqlpm" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.126121 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg"] Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.133536 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5"] Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.149796 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd"] Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.150756 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.153886 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tjq99" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.165470 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t"] Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.167302 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.171844 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-dlx25" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.202379 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd"] Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.208582 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t"] Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.220546 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm"] Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.221429 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.223632 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mkznl" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.249197 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74hp8\" (UniqueName: \"kubernetes.io/projected/f7e53cd1-216d-4b42-ad83-9d1098cc888b-kube-api-access-74hp8\") pod \"glance-operator-controller-manager-74d565fbd5-c5g8t\" (UID: \"f7e53cd1-216d-4b42-ad83-9d1098cc888b\") " pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.249271 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxzrp\" (UniqueName: \"kubernetes.io/projected/3f340508-914a-4a30-8ba8-2fdafac3f865-kube-api-access-bxzrp\") pod \"cinder-operator-controller-manager-cb6d66846-g9rf5\" (UID: \"3f340508-914a-4a30-8ba8-2fdafac3f865\") " pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.249322 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4d7s\" (UniqueName: \"kubernetes.io/projected/650e636f-cd1b-4f5b-814d-076980bd8141-kube-api-access-q4d7s\") pod \"barbican-operator-controller-manager-64768694d-fjdmg\" (UID: \"650e636f-cd1b-4f5b-814d-076980bd8141\") " pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg" Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.249340 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvvcv\" (UniqueName: 
\"kubernetes.io/projected/3259cee4-085a-4ba7-a3f3-117165a3b966-kube-api-access-cvvcv\") pod \"heat-operator-controller-manager-6d6bd468b-nwggm\" (UID: \"3259cee4-085a-4ba7-a3f3-117165a3b966\") " pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.259597 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.277150 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.278330 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.281389 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wrm6x"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.292250 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.293498 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.305496 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.306873 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-r6jnw"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.306940 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.319667 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.342790 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.344027 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.348504 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gbchj"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.350924 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf4ws\" (UniqueName: \"kubernetes.io/projected/68b90df0-f51f-4365-b2e0-96731de5afe3-kube-api-access-pf4ws\") pod \"designate-operator-controller-manager-9c8c85cd7-5xpwd\" (UID: \"68b90df0-f51f-4365-b2e0-96731de5afe3\") " pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.351099 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4d7s\" (UniqueName: \"kubernetes.io/projected/650e636f-cd1b-4f5b-814d-076980bd8141-kube-api-access-q4d7s\") pod \"barbican-operator-controller-manager-64768694d-fjdmg\" (UID: \"650e636f-cd1b-4f5b-814d-076980bd8141\") " pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.351143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvvcv\" (UniqueName: \"kubernetes.io/projected/3259cee4-085a-4ba7-a3f3-117165a3b966-kube-api-access-cvvcv\") pod \"heat-operator-controller-manager-6d6bd468b-nwggm\" (UID: \"3259cee4-085a-4ba7-a3f3-117165a3b966\") " pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.351190 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74hp8\" (UniqueName: \"kubernetes.io/projected/f7e53cd1-216d-4b42-ad83-9d1098cc888b-kube-api-access-74hp8\") pod \"glance-operator-controller-manager-74d565fbd5-c5g8t\" (UID: \"f7e53cd1-216d-4b42-ad83-9d1098cc888b\") " pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.351273 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxzrp\" (UniqueName: \"kubernetes.io/projected/3f340508-914a-4a30-8ba8-2fdafac3f865-kube-api-access-bxzrp\") pod \"cinder-operator-controller-manager-cb6d66846-g9rf5\" (UID: \"3f340508-914a-4a30-8ba8-2fdafac3f865\") " pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.374104 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.383609 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4d7s\" (UniqueName: \"kubernetes.io/projected/650e636f-cd1b-4f5b-814d-076980bd8141-kube-api-access-q4d7s\") pod \"barbican-operator-controller-manager-64768694d-fjdmg\" (UID: \"650e636f-cd1b-4f5b-814d-076980bd8141\") " pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.386133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74hp8\" (UniqueName: \"kubernetes.io/projected/f7e53cd1-216d-4b42-ad83-9d1098cc888b-kube-api-access-74hp8\") pod \"glance-operator-controller-manager-74d565fbd5-c5g8t\" (UID: \"f7e53cd1-216d-4b42-ad83-9d1098cc888b\") " pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.387168 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxzrp\" (UniqueName: \"kubernetes.io/projected/3f340508-914a-4a30-8ba8-2fdafac3f865-kube-api-access-bxzrp\") pod \"cinder-operator-controller-manager-cb6d66846-g9rf5\" (UID: \"3f340508-914a-4a30-8ba8-2fdafac3f865\") " pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.389317 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.390560 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.397998 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-5l44g"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.415413 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvvcv\" (UniqueName: \"kubernetes.io/projected/3259cee4-085a-4ba7-a3f3-117165a3b966-kube-api-access-cvvcv\") pod \"heat-operator-controller-manager-6d6bd468b-nwggm\" (UID: \"3259cee4-085a-4ba7-a3f3-117165a3b966\") " pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.420570 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.435803 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.444914 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.456522 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.456593 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf4ws\" (UniqueName: \"kubernetes.io/projected/68b90df0-f51f-4365-b2e0-96731de5afe3-kube-api-access-pf4ws\") pod \"designate-operator-controller-manager-9c8c85cd7-5xpwd\" (UID: \"68b90df0-f51f-4365-b2e0-96731de5afe3\") " pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.456619 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxqpb\" (UniqueName: \"kubernetes.io/projected/3ea49362-1a35-4a1d-8bc4-1a34041ef967-kube-api-access-rxqpb\") pod \"ironic-operator-controller-manager-bf6b7fd8c-q966w\" (UID: \"3ea49362-1a35-4a1d-8bc4-1a34041ef967\") " pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.456776 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhk7v\" (UniqueName: \"kubernetes.io/projected/ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c-kube-api-access-zhk7v\") pod \"keystone-operator-controller-manager-68f8d496f8-zj8hh\" (UID: \"ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c\") " pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.456814 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xkg\" (UniqueName: \"kubernetes.io/projected/9ab0ae56-f1a8-473a-894f-00af6c8d174b-kube-api-access-v4xkg\") pod \"horizon-operator-controller-manager-5b9475cdd7-hb4t9\" (UID: \"9ab0ae56-f1a8-473a-894f-00af6c8d174b\") " pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.456840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxm74\" (UniqueName: \"kubernetes.io/projected/a0c504b4-c098-4ce0-930e-289770c5113f-kube-api-access-nxm74\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.465595 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.466896 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.491916 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zdxq5"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.492681 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.501362 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.520732 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.521472 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.530598 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6wlsv"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.560469 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.572271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.572329 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxqpb\" (UniqueName: \"kubernetes.io/projected/3ea49362-1a35-4a1d-8bc4-1a34041ef967-kube-api-access-rxqpb\") pod \"ironic-operator-controller-manager-bf6b7fd8c-q966w\" (UID: \"3ea49362-1a35-4a1d-8bc4-1a34041ef967\") " pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.572374 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftjtc\" (UniqueName: \"kubernetes.io/projected/2f14a802-394d-4f62-a2aa-f5a2595c520e-kube-api-access-ftjtc\") pod \"manila-operator-controller-manager-6f6f57b9b6-hd7c8\" (UID: \"2f14a802-394d-4f62-a2aa-f5a2595c520e\") " pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.572414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bdhv\" (UniqueName: \"kubernetes.io/projected/848518af-f0df-41f4-b0b6-e38b2e1df95b-kube-api-access-8bdhv\") pod \"mariadb-operator-controller-manager-744456f686-bz5rc\" (UID: \"848518af-f0df-41f4-b0b6-e38b2e1df95b\") " pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.572450 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhk7v\" (UniqueName: \"kubernetes.io/projected/ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c-kube-api-access-zhk7v\") pod \"keystone-operator-controller-manager-68f8d496f8-zj8hh\" (UID: \"ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c\") " pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.572472 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4xkg\" (UniqueName: \"kubernetes.io/projected/9ab0ae56-f1a8-473a-894f-00af6c8d174b-kube-api-access-v4xkg\") pod \"horizon-operator-controller-manager-5b9475cdd7-hb4t9\" (UID: \"9ab0ae56-f1a8-473a-894f-00af6c8d174b\") " pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.572496 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxm74\" (UniqueName: \"kubernetes.io/projected/a0c504b4-c098-4ce0-930e-289770c5113f-kube-api-access-nxm74\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch"
Mar 14 09:15:19 crc kubenswrapper[4869]: E0314 09:15:19.575021 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 14 09:15:19 crc kubenswrapper[4869]: E0314 09:15:19.575100 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert podName:a0c504b4-c098-4ce0-930e-289770c5113f nodeName:}" failed. No retries permitted until 2026-03-14 09:15:20.075078199 +0000 UTC m=+1073.047360252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert") pod "infra-operator-controller-manager-fbfb5bd65-ncnch" (UID: "a0c504b4-c098-4ce0-930e-289770c5113f") : secret "infra-operator-webhook-server-cert" not found
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.576539 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf4ws\" (UniqueName: \"kubernetes.io/projected/68b90df0-f51f-4365-b2e0-96731de5afe3-kube-api-access-pf4ws\") pod \"designate-operator-controller-manager-9c8c85cd7-5xpwd\" (UID: \"68b90df0-f51f-4365-b2e0-96731de5afe3\") " pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.582307 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.606450 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.608411 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.626765 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhk7v\" (UniqueName: \"kubernetes.io/projected/ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c-kube-api-access-zhk7v\") pod \"keystone-operator-controller-manager-68f8d496f8-zj8hh\" (UID: \"ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c\") " pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.633301 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxm74\" (UniqueName: \"kubernetes.io/projected/a0c504b4-c098-4ce0-930e-289770c5113f-kube-api-access-nxm74\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.633704 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxqpb\" (UniqueName: \"kubernetes.io/projected/3ea49362-1a35-4a1d-8bc4-1a34041ef967-kube-api-access-rxqpb\") pod \"ironic-operator-controller-manager-bf6b7fd8c-q966w\" (UID: \"3ea49362-1a35-4a1d-8bc4-1a34041ef967\") " pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.634210 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4xkg\" (UniqueName: \"kubernetes.io/projected/9ab0ae56-f1a8-473a-894f-00af6c8d174b-kube-api-access-v4xkg\") pod \"horizon-operator-controller-manager-5b9475cdd7-hb4t9\" (UID: \"9ab0ae56-f1a8-473a-894f-00af6c8d174b\") " pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.643312 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-j9wlj"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.674348 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.677189 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.677710 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.687324 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.696965 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-zdj5m"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.697926 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bdhv\" (UniqueName: \"kubernetes.io/projected/848518af-f0df-41f4-b0b6-e38b2e1df95b-kube-api-access-8bdhv\") pod \"mariadb-operator-controller-manager-744456f686-bz5rc\" (UID: \"848518af-f0df-41f4-b0b6-e38b2e1df95b\") " pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.698028 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftjtc\" (UniqueName: \"kubernetes.io/projected/2f14a802-394d-4f62-a2aa-f5a2595c520e-kube-api-access-ftjtc\") pod \"manila-operator-controller-manager-6f6f57b9b6-hd7c8\" (UID: \"2f14a802-394d-4f62-a2aa-f5a2595c520e\") " pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.700639 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.760793 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.761886 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.762651 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.762701 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.776263 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-848d74f969-xt747"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.777237 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.778161 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.778558 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.779158 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.782363 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bdhv\" (UniqueName: \"kubernetes.io/projected/848518af-f0df-41f4-b0b6-e38b2e1df95b-kube-api-access-8bdhv\") pod \"mariadb-operator-controller-manager-744456f686-bz5rc\" (UID: \"848518af-f0df-41f4-b0b6-e38b2e1df95b\") " pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.782938 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.782993 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftjtc\" (UniqueName: \"kubernetes.io/projected/2f14a802-394d-4f62-a2aa-f5a2595c520e-kube-api-access-ftjtc\") pod \"manila-operator-controller-manager-6f6f57b9b6-hd7c8\" (UID: \"2f14a802-394d-4f62-a2aa-f5a2595c520e\") " pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.783132 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-74jts"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.784005 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jsx4m"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.799291 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqf9w\" (UniqueName: \"kubernetes.io/projected/779acd04-3c3b-4b59-8a41-b54250cfb2cb-kube-api-access-wqf9w\") pod \"neutron-operator-controller-manager-645c9f6488-p4vnd\" (UID: \"779acd04-3c3b-4b59-8a41-b54250cfb2cb\") " pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.799389 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28f2b\" (UniqueName: \"kubernetes.io/projected/d521dfe5-1037-4df9-a34b-5996da959160-kube-api-access-28f2b\") pod \"nova-operator-controller-manager-58ff56fcc7-n9qfr\" (UID: \"d521dfe5-1037-4df9-a34b-5996da959160\") " pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.801321 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.801917 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z9sks"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.802201 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-v4h22"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.814289 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.825586 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-848d74f969-xt747"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.833203 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.867014 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.879799 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.900109 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.900394 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c9gl\" (UniqueName: \"kubernetes.io/projected/09c23762-07cd-45d1-97ce-dc91ffebacfc-kube-api-access-6c9gl\") pod \"octavia-operator-controller-manager-7cf9f49d6-6pr99\" (UID: \"09c23762-07cd-45d1-97ce-dc91ffebacfc\") " pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.900431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqf9w\" (UniqueName: \"kubernetes.io/projected/779acd04-3c3b-4b59-8a41-b54250cfb2cb-kube-api-access-wqf9w\") pod \"neutron-operator-controller-manager-645c9f6488-p4vnd\" (UID: \"779acd04-3c3b-4b59-8a41-b54250cfb2cb\") " pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.900459 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.900488 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8sm4\" (UniqueName: \"kubernetes.io/projected/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-kube-api-access-s8sm4\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.900593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7ks\" (UniqueName: \"kubernetes.io/projected/32265d81-a0fb-47e8-9cab-d88245cade72-kube-api-access-wf7ks\") pod \"placement-operator-controller-manager-b5c469fd-2hff7\" (UID: \"32265d81-a0fb-47e8-9cab-d88245cade72\") " pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.900623 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28f2b\" (UniqueName: \"kubernetes.io/projected/d521dfe5-1037-4df9-a34b-5996da959160-kube-api-access-28f2b\") pod \"nova-operator-controller-manager-58ff56fcc7-n9qfr\" (UID: \"d521dfe5-1037-4df9-a34b-5996da959160\") " pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.900645 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbcf5\" (UniqueName: \"kubernetes.io/projected/6086aaa8-fd6f-4e48-bc77-1b5fad163e38-kube-api-access-qbcf5\") pod \"ovn-operator-controller-manager-848d74f969-xt747\" (UID: \"6086aaa8-fd6f-4e48-bc77-1b5fad163e38\") " pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.906997 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.926549 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.927465 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.945873 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-js9mm"
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.971602 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64"]
Mar 14 09:15:19 crc kubenswrapper[4869]: I0314 09:15:19.993650 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28f2b\" (UniqueName: \"kubernetes.io/projected/d521dfe5-1037-4df9-a34b-5996da959160-kube-api-access-28f2b\") pod \"nova-operator-controller-manager-58ff56fcc7-n9qfr\" (UID: \"d521dfe5-1037-4df9-a34b-5996da959160\") " pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.001698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbcf5\" (UniqueName: \"kubernetes.io/projected/6086aaa8-fd6f-4e48-bc77-1b5fad163e38-kube-api-access-qbcf5\") pod \"ovn-operator-controller-manager-848d74f969-xt747\" (UID: \"6086aaa8-fd6f-4e48-bc77-1b5fad163e38\") " pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.001802 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c9gl\" (UniqueName: \"kubernetes.io/projected/09c23762-07cd-45d1-97ce-dc91ffebacfc-kube-api-access-6c9gl\") pod \"octavia-operator-controller-manager-7cf9f49d6-6pr99\" (UID: \"09c23762-07cd-45d1-97ce-dc91ffebacfc\") " pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.001855 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.001893 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8sm4\" (UniqueName: \"kubernetes.io/projected/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-kube-api-access-s8sm4\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.001947 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7ks\" (UniqueName: \"kubernetes.io/projected/32265d81-a0fb-47e8-9cab-d88245cade72-kube-api-access-wf7ks\") pod \"placement-operator-controller-manager-b5c469fd-2hff7\" (UID: \"32265d81-a0fb-47e8-9cab-d88245cade72\") " pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7"
Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.003697 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.003782 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert podName:4afcee0e-ed99-4df2-b68d-ba86e8dedacc nodeName:}" failed. No retries permitted until 2026-03-14 09:15:20.503755725 +0000 UTC m=+1073.476037778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" (UID: "4afcee0e-ed99-4df2-b68d-ba86e8dedacc") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.031009 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5"]
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.032001 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.034718 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqf9w\" (UniqueName: \"kubernetes.io/projected/779acd04-3c3b-4b59-8a41-b54250cfb2cb-kube-api-access-wqf9w\") pod \"neutron-operator-controller-manager-645c9f6488-p4vnd\" (UID: \"779acd04-3c3b-4b59-8a41-b54250cfb2cb\") " pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.044427 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hwzlf"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.059102 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5"]
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.062992 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.066863 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs"]
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.068095 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.072221 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sktl9"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.073005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7ks\" (UniqueName: \"kubernetes.io/projected/32265d81-a0fb-47e8-9cab-d88245cade72-kube-api-access-wf7ks\") pod \"placement-operator-controller-manager-b5c469fd-2hff7\" (UID: \"32265d81-a0fb-47e8-9cab-d88245cade72\") " pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7"
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.073141 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr"]
Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.073984 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.080092 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr"] Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.086051 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wksnl" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.098498 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8sm4\" (UniqueName: \"kubernetes.io/projected/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-kube-api-access-s8sm4\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.102685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.102792 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl8hq\" (UniqueName: \"kubernetes.io/projected/12ffac0c-6749-4576-8bdf-f2eb432a6373-kube-api-access-vl8hq\") pod \"swift-operator-controller-manager-7f7469dbc6-msr64\" (UID: \"12ffac0c-6749-4576-8bdf-f2eb432a6373\") " pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.103035 4869 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.103088 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert podName:a0c504b4-c098-4ce0-930e-289770c5113f nodeName:}" failed. No retries permitted until 2026-03-14 09:15:21.103071291 +0000 UTC m=+1074.075353344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert") pod "infra-operator-controller-manager-fbfb5bd65-ncnch" (UID: "a0c504b4-c098-4ce0-930e-289770c5113f") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.105642 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs"] Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.106188 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c9gl\" (UniqueName: \"kubernetes.io/projected/09c23762-07cd-45d1-97ce-dc91ffebacfc-kube-api-access-6c9gl\") pod \"octavia-operator-controller-manager-7cf9f49d6-6pr99\" (UID: \"09c23762-07cd-45d1-97ce-dc91ffebacfc\") " pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.109083 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbcf5\" (UniqueName: \"kubernetes.io/projected/6086aaa8-fd6f-4e48-bc77-1b5fad163e38-kube-api-access-qbcf5\") pod \"ovn-operator-controller-manager-848d74f969-xt747\" (UID: \"6086aaa8-fd6f-4e48-bc77-1b5fad163e38\") " pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.109668 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.146891 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.163751 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9"] Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.173207 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9"] Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.173370 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.175787 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.175966 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tdx2p" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.177038 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.201482 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.209258 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw8cg\" (UniqueName: \"kubernetes.io/projected/3b368982-02ed-44bb-bba7-9e707d2e4fbf-kube-api-access-mw8cg\") pod \"watcher-operator-controller-manager-7cc8dbcb54-9rqrs\" (UID: \"3b368982-02ed-44bb-bba7-9e707d2e4fbf\") " pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.209318 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fm9j\" (UniqueName: \"kubernetes.io/projected/451a50a4-ee48-4f61-9c05-514ce3897ffa-kube-api-access-6fm9j\") pod \"telemetry-operator-controller-manager-6646df7cdb-7lbq5\" (UID: \"451a50a4-ee48-4f61-9c05-514ce3897ffa\") " pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.209530 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8hq\" (UniqueName: \"kubernetes.io/projected/12ffac0c-6749-4576-8bdf-f2eb432a6373-kube-api-access-vl8hq\") pod \"swift-operator-controller-manager-7f7469dbc6-msr64\" (UID: \"12ffac0c-6749-4576-8bdf-f2eb432a6373\") " pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.209597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdwd\" (UniqueName: \"kubernetes.io/projected/3961ac22-8919-4b7a-8b44-64c1c5d9e1be-kube-api-access-tfdwd\") pod \"test-operator-controller-manager-8467ccb4c8-gplwr\" (UID: \"3961ac22-8919-4b7a-8b44-64c1c5d9e1be\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" Mar 
14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.251342 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl8hq\" (UniqueName: \"kubernetes.io/projected/12ffac0c-6749-4576-8bdf-f2eb432a6373-kube-api-access-vl8hq\") pod \"swift-operator-controller-manager-7f7469dbc6-msr64\" (UID: \"12ffac0c-6749-4576-8bdf-f2eb432a6373\") " pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.288893 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5"] Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.296124 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.300969 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jf7zn" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.301666 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5"] Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.316051 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6k7\" (UniqueName: \"kubernetes.io/projected/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-kube-api-access-lj6k7\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.316114 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.316470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw8cg\" (UniqueName: \"kubernetes.io/projected/3b368982-02ed-44bb-bba7-9e707d2e4fbf-kube-api-access-mw8cg\") pod \"watcher-operator-controller-manager-7cc8dbcb54-9rqrs\" (UID: \"3b368982-02ed-44bb-bba7-9e707d2e4fbf\") " pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.316529 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fm9j\" (UniqueName: \"kubernetes.io/projected/451a50a4-ee48-4f61-9c05-514ce3897ffa-kube-api-access-6fm9j\") pod \"telemetry-operator-controller-manager-6646df7cdb-7lbq5\" (UID: \"451a50a4-ee48-4f61-9c05-514ce3897ffa\") " pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.316604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.316673 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdwd\" (UniqueName: \"kubernetes.io/projected/3961ac22-8919-4b7a-8b44-64c1c5d9e1be-kube-api-access-tfdwd\") pod \"test-operator-controller-manager-8467ccb4c8-gplwr\" (UID: 
\"3961ac22-8919-4b7a-8b44-64c1c5d9e1be\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.338124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfdwd\" (UniqueName: \"kubernetes.io/projected/3961ac22-8919-4b7a-8b44-64c1c5d9e1be-kube-api-access-tfdwd\") pod \"test-operator-controller-manager-8467ccb4c8-gplwr\" (UID: \"3961ac22-8919-4b7a-8b44-64c1c5d9e1be\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.338415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw8cg\" (UniqueName: \"kubernetes.io/projected/3b368982-02ed-44bb-bba7-9e707d2e4fbf-kube-api-access-mw8cg\") pod \"watcher-operator-controller-manager-7cc8dbcb54-9rqrs\" (UID: \"3b368982-02ed-44bb-bba7-9e707d2e4fbf\") " pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.338572 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.340743 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fm9j\" (UniqueName: \"kubernetes.io/projected/451a50a4-ee48-4f61-9c05-514ce3897ffa-kube-api-access-6fm9j\") pod \"telemetry-operator-controller-manager-6646df7cdb-7lbq5\" (UID: \"451a50a4-ee48-4f61-9c05-514ce3897ffa\") " pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.423606 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.423680 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqnhg\" (UniqueName: \"kubernetes.io/projected/e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52-kube-api-access-fqnhg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zjmd5\" (UID: \"e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.423754 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6k7\" (UniqueName: \"kubernetes.io/projected/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-kube-api-access-lj6k7\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 
09:15:20.423767 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.423781 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.423835 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:20.923814869 +0000 UTC m=+1073.896096922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.423951 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.424009 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:20.923988603 +0000 UTC m=+1073.896270656 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "metrics-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.449431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6k7\" (UniqueName: \"kubernetes.io/projected/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-kube-api-access-lj6k7\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.470864 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.498006 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.520591 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.525326 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnhg\" (UniqueName: \"kubernetes.io/projected/e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52-kube-api-access-fqnhg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zjmd5\" (UID: \"e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.525454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.525619 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.525679 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert podName:4afcee0e-ed99-4df2-b68d-ba86e8dedacc nodeName:}" failed. No retries permitted until 2026-03-14 09:15:21.525663057 +0000 UTC m=+1074.497945110 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" (UID: "4afcee0e-ed99-4df2-b68d-ba86e8dedacc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.542958 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm"] Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.547322 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnhg\" (UniqueName: \"kubernetes.io/projected/e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52-kube-api-access-fqnhg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zjmd5\" (UID: \"e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.555158 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.680834 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.930670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.930766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.930827 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.930894 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:21.930871995 +0000 UTC m=+1074.903154038 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "metrics-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.930908 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: E0314 09:15:20.930954 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:21.930937897 +0000 UTC m=+1074.903219950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "webhook-server-cert" not found Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.932192 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t"] Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.947438 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5"] Mar 14 09:15:20 crc kubenswrapper[4869]: W0314 09:15:20.948035 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f340508_914a_4a30_8ba8_2fdafac3f865.slice/crio-b77569ea81d67894b5f65c8260f9656ea5ac857834bfcdf88bdce0a5f941718a WatchSource:0}: Error finding container b77569ea81d67894b5f65c8260f9656ea5ac857834bfcdf88bdce0a5f941718a: Status 404 returned error can't find the 
container with id b77569ea81d67894b5f65c8260f9656ea5ac857834bfcdf88bdce0a5f941718a Mar 14 09:15:20 crc kubenswrapper[4869]: I0314 09:15:20.954033 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg"] Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.133499 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.133719 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.133775 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert podName:a0c504b4-c098-4ce0-930e-289770c5113f nodeName:}" failed. No retries permitted until 2026-03-14 09:15:23.133757431 +0000 UTC m=+1076.106039494 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert") pod "infra-operator-controller-manager-fbfb5bd65-ncnch" (UID: "a0c504b4-c098-4ce0-930e-289770c5113f") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.172722 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7"] Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 09:15:21.179251 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32265d81_a0fb_47e8_9cab_d88245cade72.slice/crio-537bdeacc5bb5e9068ac461a560d023441f1e0f27e3c4730d1b765584b444f66 WatchSource:0}: Error finding container 537bdeacc5bb5e9068ac461a560d023441f1e0f27e3c4730d1b765584b444f66: Status 404 returned error can't find the container with id 537bdeacc5bb5e9068ac461a560d023441f1e0f27e3c4730d1b765584b444f66 Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.181129 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99"] Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 09:15:21.182105 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09c23762_07cd_45d1_97ce_dc91ffebacfc.slice/crio-35ce498cbbc747a5aa261bcaa4e11221f1cd9838f71f5a6eb2dff9e4f1922a37 WatchSource:0}: Error finding container 35ce498cbbc747a5aa261bcaa4e11221f1cd9838f71f5a6eb2dff9e4f1922a37: Status 404 returned error can't find the container with id 35ce498cbbc747a5aa261bcaa4e11221f1cd9838f71f5a6eb2dff9e4f1922a37 Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.226138 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7" 
event={"ID":"32265d81-a0fb-47e8-9cab-d88245cade72","Type":"ContainerStarted","Data":"537bdeacc5bb5e9068ac461a560d023441f1e0f27e3c4730d1b765584b444f66"} Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.227232 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99" event={"ID":"09c23762-07cd-45d1-97ce-dc91ffebacfc","Type":"ContainerStarted","Data":"35ce498cbbc747a5aa261bcaa4e11221f1cd9838f71f5a6eb2dff9e4f1922a37"} Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.228062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg" event={"ID":"650e636f-cd1b-4f5b-814d-076980bd8141","Type":"ContainerStarted","Data":"69b69af0553a1250c9730b544f8c8f90c2dc5ed7ccf6a8eacd6203e7470ecb20"} Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.228855 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm" event={"ID":"3259cee4-085a-4ba7-a3f3-117165a3b966","Type":"ContainerStarted","Data":"81662d6fd5301076c0cb4c212345675e2f3656b56e3393b179bca4834c928937"} Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.229894 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t" event={"ID":"f7e53cd1-216d-4b42-ad83-9d1098cc888b","Type":"ContainerStarted","Data":"489fcd2853f18d87576cf0bd7ddffb2a690452ab2697f7e21516d79623be4581"} Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.230658 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5" event={"ID":"3f340508-914a-4a30-8ba8-2fdafac3f865","Type":"ContainerStarted","Data":"b77569ea81d67894b5f65c8260f9656ea5ac857834bfcdf88bdce0a5f941718a"} Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.361328 4869 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w"] Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.388681 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr"] Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.403938 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd"] Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.446007 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd"] Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 09:15:21.446103 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7ce5477_6d00_4d1b_a1c1_c244ac7e3c52.slice/crio-7bbe36a95c64839a9fad60689834ee200e89a73e26fd820c9670ec62e2bf9ba3 WatchSource:0}: Error finding container 7bbe36a95c64839a9fad60689834ee200e89a73e26fd820c9670ec62e2bf9ba3: Status 404 returned error can't find the container with id 7bbe36a95c64839a9fad60689834ee200e89a73e26fd820c9670ec62e2bf9ba3 Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.466123 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc"] Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.472730 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5"] Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.541327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.541903 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.542082 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert podName:4afcee0e-ed99-4df2-b68d-ba86e8dedacc nodeName:}" failed. No retries permitted until 2026-03-14 09:15:23.542056756 +0000 UTC m=+1076.514338829 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" (UID: "4afcee0e-ed99-4df2-b68d-ba86e8dedacc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.576797 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr"] Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.582632 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh"] Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 09:15:21.586412 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3961ac22_8919_4b7a_8b44_64c1c5d9e1be.slice/crio-bba9433cc9e4895a242cd448af33323e8cb8455eeb00416c63018f698e2602f1 WatchSource:0}: Error finding container bba9433cc9e4895a242cd448af33323e8cb8455eeb00416c63018f698e2602f1: Status 404 returned error can't find the container with id bba9433cc9e4895a242cd448af33323e8cb8455eeb00416c63018f698e2602f1 Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 
09:15:21.588718 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6086aaa8_fd6f_4e48_bc77_1b5fad163e38.slice/crio-c76a29e653df238fd2b9de1055541cb452dd92b76032152c02c3b99e610fdfd2 WatchSource:0}: Error finding container c76a29e653df238fd2b9de1055541cb452dd92b76032152c02c3b99e610fdfd2: Status 404 returned error can't find the container with id c76a29e653df238fd2b9de1055541cb452dd92b76032152c02c3b99e610fdfd2 Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.596076 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-848d74f969-xt747"] Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 09:15:21.597465 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod451a50a4_ee48_4f61_9c05_514ce3897ffa.slice/crio-a3e2e63af039cab7e82cac79a3f21f464e59a34aabfb6e590275db14e9e15ed9 WatchSource:0}: Error finding container a3e2e63af039cab7e82cac79a3f21f464e59a34aabfb6e590275db14e9e15ed9: Status 404 returned error can't find the container with id a3e2e63af039cab7e82cac79a3f21f464e59a34aabfb6e590275db14e9e15ed9 Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 09:15:21.597951 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f14a802_394d_4f62_a2aa_f5a2595c520e.slice/crio-97c357e95d04b6de3be4d108b6786ac626fcf4dab773eacc01ac60060c6bc222 WatchSource:0}: Error finding container 97c357e95d04b6de3be4d108b6786ac626fcf4dab773eacc01ac60060c6bc222: Status 404 returned error can't find the container with id 97c357e95d04b6de3be4d108b6786ac626fcf4dab773eacc01ac60060c6bc222 Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 09:15:21.599744 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b368982_02ed_44bb_bba7_9e707d2e4fbf.slice/crio-023489903ba5d3ee24b75fbf599cf9431c6fde36e205ed42493901919bd3fc0b WatchSource:0}: Error finding container 023489903ba5d3ee24b75fbf599cf9431c6fde36e205ed42493901919bd3fc0b: Status 404 returned error can't find the container with id 023489903ba5d3ee24b75fbf599cf9431c6fde36e205ed42493901919bd3fc0b Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.602440 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.153:5001/openstack-k8s-operators/watcher-operator:3fad4a9eb56718f26ce2ec186bb570f2695f01c3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mw8cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7cc8dbcb54-9rqrs_openstack-operators(3b368982-02ed-44bb-bba7-9e707d2e4fbf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.602454 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:52fef288c693a77f8cb78b5284261d0da532e9552c4aef21faf68426624ee165,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6fm9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6646df7cdb-7lbq5_openstack-operators(451a50a4-ee48-4f61-9c05-514ce3897ffa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.603665 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" podUID="451a50a4-ee48-4f61-9c05-514ce3897ffa" Mar 14 09:15:21 crc 
kubenswrapper[4869]: E0314 09:15:21.603742 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" podUID="3b368982-02ed-44bb-bba7-9e707d2e4fbf" Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 09:15:21.607861 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12ffac0c_6749_4576_8bdf_f2eb432a6373.slice/crio-8742e48153101810b9d7f5c3863c860c0e0fc37b45476f04d9fa1b6c82969c70 WatchSource:0}: Error finding container 8742e48153101810b9d7f5c3863c860c0e0fc37b45476f04d9fa1b6c82969c70: Status 404 returned error can't find the container with id 8742e48153101810b9d7f5c3863c860c0e0fc37b45476f04d9fa1b6c82969c70 Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.608315 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:6e988fa8bacb3367dea2e02d28abf23403affdb604ca0353473264ec21051ff2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ftjtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6f6f57b9b6-hd7c8_openstack-operators(2f14a802-394d-4f62-a2aa-f5a2595c520e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.609605 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8" podUID="2f14a802-394d-4f62-a2aa-f5a2595c520e" Mar 14 09:15:21 crc 
kubenswrapper[4869]: E0314 09:15:21.612235 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:25b9550f0738285c05af02dda06d4ed9edb64e8200cd487dd8af29dea7717278,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vl8hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7f7469dbc6-msr64_openstack-operators(12ffac0c-6749-4576-8bdf-f2eb432a6373): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.615943 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" podUID="12ffac0c-6749-4576-8bdf-f2eb432a6373" Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.623134 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5"] Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.636704 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8"] Mar 14 09:15:21 crc kubenswrapper[4869]: W0314 09:15:21.643379 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ab0ae56_f1a8_473a_894f_00af6c8d174b.slice/crio-809b3f7644a57e5ec0994f386bccd3b19e215918e7cbba9a5dc0288651b7ee43 WatchSource:0}: Error finding container 809b3f7644a57e5ec0994f386bccd3b19e215918e7cbba9a5dc0288651b7ee43: Status 404 returned error can't find the container with id 809b3f7644a57e5ec0994f386bccd3b19e215918e7cbba9a5dc0288651b7ee43 Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.647689 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64"] Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.648653 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:a70ca136f44c6e6a2019ef73a813bdb97b2f7901a71f88591f3845750a554f88,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhk7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-68f8d496f8-zj8hh_openstack-operators(ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.649996 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh" podUID="ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c" Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.653029 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs"] Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.653383 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:b15bc78181df64e701e7dd6fd70f6c26c2cbb20c2a9e3b1180a635b791d586bf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4xkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9475cdd7-hb4t9_openstack-operators(9ab0ae56-f1a8-473a-894f-00af6c8d174b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.654826 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9" podUID="9ab0ae56-f1a8-473a-894f-00af6c8d174b" Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.659004 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9"] Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.947233 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:21 crc kubenswrapper[4869]: 
E0314 09:15:21.947409 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:21 crc kubenswrapper[4869]: I0314 09:15:21.947650 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.947680 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:23.947656294 +0000 UTC m=+1076.919938347 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "metrics-server-cert" not found Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.947865 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:21 crc kubenswrapper[4869]: E0314 09:15:21.947953 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:23.947939511 +0000 UTC m=+1076.920221554 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "webhook-server-cert" not found Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.242489 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" event={"ID":"451a50a4-ee48-4f61-9c05-514ce3897ffa","Type":"ContainerStarted","Data":"a3e2e63af039cab7e82cac79a3f21f464e59a34aabfb6e590275db14e9e15ed9"} Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.244673 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" event={"ID":"3b368982-02ed-44bb-bba7-9e707d2e4fbf","Type":"ContainerStarted","Data":"023489903ba5d3ee24b75fbf599cf9431c6fde36e205ed42493901919bd3fc0b"} Mar 14 09:15:22 crc kubenswrapper[4869]: E0314 09:15:22.245840 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.153:5001/openstack-k8s-operators/watcher-operator:3fad4a9eb56718f26ce2ec186bb570f2695f01c3\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" podUID="3b368982-02ed-44bb-bba7-9e707d2e4fbf" Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.247069 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd" event={"ID":"779acd04-3c3b-4b59-8a41-b54250cfb2cb","Type":"ContainerStarted","Data":"2a30033f6c0eddd3e718c5f92311abf5b06e5c8cfdb6a6ebad4f6c369f812ee6"} Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.249027 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8" 
event={"ID":"2f14a802-394d-4f62-a2aa-f5a2595c520e","Type":"ContainerStarted","Data":"97c357e95d04b6de3be4d108b6786ac626fcf4dab773eacc01ac60060c6bc222"} Mar 14 09:15:22 crc kubenswrapper[4869]: E0314 09:15:22.249404 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:52fef288c693a77f8cb78b5284261d0da532e9552c4aef21faf68426624ee165\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" podUID="451a50a4-ee48-4f61-9c05-514ce3897ffa" Mar 14 09:15:22 crc kubenswrapper[4869]: E0314 09:15:22.251193 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:6e988fa8bacb3367dea2e02d28abf23403affdb604ca0353473264ec21051ff2\\\"\"" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8" podUID="2f14a802-394d-4f62-a2aa-f5a2595c520e" Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.252180 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w" event={"ID":"3ea49362-1a35-4a1d-8bc4-1a34041ef967","Type":"ContainerStarted","Data":"b805399f6711625ca924679df08970448f8e92056e874aacc4eb1f439b86d985"} Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.254682 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9" event={"ID":"9ab0ae56-f1a8-473a-894f-00af6c8d174b","Type":"ContainerStarted","Data":"809b3f7644a57e5ec0994f386bccd3b19e215918e7cbba9a5dc0288651b7ee43"} Mar 14 09:15:22 crc kubenswrapper[4869]: E0314 09:15:22.255923 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:b15bc78181df64e701e7dd6fd70f6c26c2cbb20c2a9e3b1180a635b791d586bf\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9" podUID="9ab0ae56-f1a8-473a-894f-00af6c8d174b" Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.258100 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" event={"ID":"3961ac22-8919-4b7a-8b44-64c1c5d9e1be","Type":"ContainerStarted","Data":"bba9433cc9e4895a242cd448af33323e8cb8455eeb00416c63018f698e2602f1"} Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.264613 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr" event={"ID":"d521dfe5-1037-4df9-a34b-5996da959160","Type":"ContainerStarted","Data":"c2a18118f7bae6bf3efd742b8f7ebb09ec0683152214539b9d392a36c7f6613f"} Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.267938 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd" event={"ID":"68b90df0-f51f-4365-b2e0-96731de5afe3","Type":"ContainerStarted","Data":"95b0774dba9a280473c93d1a9653d8fc5b8f4ae252564c75051b63e7e0b3e1b8"} Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.271506 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc" event={"ID":"848518af-f0df-41f4-b0b6-e38b2e1df95b","Type":"ContainerStarted","Data":"47cdc0763f8c3fae5b62b122aeee991aa7a9d440515391cdff9c36ae8cadd491"} Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.275244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747" 
event={"ID":"6086aaa8-fd6f-4e48-bc77-1b5fad163e38","Type":"ContainerStarted","Data":"c76a29e653df238fd2b9de1055541cb452dd92b76032152c02c3b99e610fdfd2"} Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.285943 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" event={"ID":"e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52","Type":"ContainerStarted","Data":"7bbe36a95c64839a9fad60689834ee200e89a73e26fd820c9670ec62e2bf9ba3"} Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.288366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" event={"ID":"12ffac0c-6749-4576-8bdf-f2eb432a6373","Type":"ContainerStarted","Data":"8742e48153101810b9d7f5c3863c860c0e0fc37b45476f04d9fa1b6c82969c70"} Mar 14 09:15:22 crc kubenswrapper[4869]: E0314 09:15:22.289889 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:25b9550f0738285c05af02dda06d4ed9edb64e8200cd487dd8af29dea7717278\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" podUID="12ffac0c-6749-4576-8bdf-f2eb432a6373" Mar 14 09:15:22 crc kubenswrapper[4869]: I0314 09:15:22.292339 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh" event={"ID":"ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c","Type":"ContainerStarted","Data":"8baa0245832187a38bc095a564068cb28c3b5d05b0be986fc93b7aea40c19c88"} Mar 14 09:15:22 crc kubenswrapper[4869]: E0314 09:15:22.307371 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:a70ca136f44c6e6a2019ef73a813bdb97b2f7901a71f88591f3845750a554f88\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh" podUID="ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c" Mar 14 09:15:23 crc kubenswrapper[4869]: I0314 09:15:23.174151 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.174239 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.174328 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert podName:a0c504b4-c098-4ce0-930e-289770c5113f nodeName:}" failed. No retries permitted until 2026-03-14 09:15:27.17430772 +0000 UTC m=+1080.146589823 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert") pod "infra-operator-controller-manager-fbfb5bd65-ncnch" (UID: "a0c504b4-c098-4ce0-930e-289770c5113f") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.308374 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:a70ca136f44c6e6a2019ef73a813bdb97b2f7901a71f88591f3845750a554f88\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh" podUID="ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c" Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.309366 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:6e988fa8bacb3367dea2e02d28abf23403affdb604ca0353473264ec21051ff2\\\"\"" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8" podUID="2f14a802-394d-4f62-a2aa-f5a2595c520e" Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.309498 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:b15bc78181df64e701e7dd6fd70f6c26c2cbb20c2a9e3b1180a635b791d586bf\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9" podUID="9ab0ae56-f1a8-473a-894f-00af6c8d174b" Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.309568 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:25b9550f0738285c05af02dda06d4ed9edb64e8200cd487dd8af29dea7717278\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" podUID="12ffac0c-6749-4576-8bdf-f2eb432a6373" Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.309838 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:52fef288c693a77f8cb78b5284261d0da532e9552c4aef21faf68426624ee165\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" podUID="451a50a4-ee48-4f61-9c05-514ce3897ffa" Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.311548 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.153:5001/openstack-k8s-operators/watcher-operator:3fad4a9eb56718f26ce2ec186bb570f2695f01c3\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" podUID="3b368982-02ed-44bb-bba7-9e707d2e4fbf" Mar 14 09:15:23 crc kubenswrapper[4869]: I0314 09:15:23.582417 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.582720 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.582860 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert podName:4afcee0e-ed99-4df2-b68d-ba86e8dedacc nodeName:}" failed. No retries permitted until 2026-03-14 09:15:27.58282869 +0000 UTC m=+1080.555110743 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" (UID: "4afcee0e-ed99-4df2-b68d-ba86e8dedacc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:23 crc kubenswrapper[4869]: I0314 09:15:23.989885 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:23 crc kubenswrapper[4869]: I0314 09:15:23.990025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.990230 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.990229 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.990290 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:27.990270373 +0000 UTC m=+1080.962552426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "metrics-server-cert" not found Mar 14 09:15:23 crc kubenswrapper[4869]: E0314 09:15:23.990310 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:27.990302074 +0000 UTC m=+1080.962584227 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "webhook-server-cert" not found Mar 14 09:15:27 crc kubenswrapper[4869]: I0314 09:15:27.242706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:15:27 crc kubenswrapper[4869]: E0314 09:15:27.243418 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:27 crc kubenswrapper[4869]: E0314 09:15:27.243464 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert podName:a0c504b4-c098-4ce0-930e-289770c5113f nodeName:}" failed. No retries permitted until 2026-03-14 09:15:35.243450483 +0000 UTC m=+1088.215732536 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert") pod "infra-operator-controller-manager-fbfb5bd65-ncnch" (UID: "a0c504b4-c098-4ce0-930e-289770c5113f") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:27 crc kubenswrapper[4869]: I0314 09:15:27.649033 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:15:27 crc kubenswrapper[4869]: E0314 09:15:27.649252 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:27 crc kubenswrapper[4869]: E0314 09:15:27.649360 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert podName:4afcee0e-ed99-4df2-b68d-ba86e8dedacc nodeName:}" failed. No retries permitted until 2026-03-14 09:15:35.649326538 +0000 UTC m=+1088.621608591 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" (UID: "4afcee0e-ed99-4df2-b68d-ba86e8dedacc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:28 crc kubenswrapper[4869]: I0314 09:15:28.058303 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:28 crc kubenswrapper[4869]: I0314 09:15:28.058438 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:28 crc kubenswrapper[4869]: E0314 09:15:28.058648 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:28 crc kubenswrapper[4869]: E0314 09:15:28.058708 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:36.058689328 +0000 UTC m=+1089.030971391 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "webhook-server-cert" not found Mar 14 09:15:28 crc kubenswrapper[4869]: E0314 09:15:28.059076 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:28 crc kubenswrapper[4869]: E0314 09:15:28.059109 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:36.059098048 +0000 UTC m=+1089.031380101 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "metrics-server-cert" not found Mar 14 09:15:34 crc kubenswrapper[4869]: E0314 09:15:34.196990 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:e6b59eec0f3c9b3227b57f4c98704a37c688a662f49f22756ee8ba0674e81e86" Mar 14 09:15:34 crc kubenswrapper[4869]: E0314 09:15:34.197579 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:e6b59eec0f3c9b3227b57f4c98704a37c688a662f49f22756ee8ba0674e81e86,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q4d7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-64768694d-fjdmg_openstack-operators(650e636f-cd1b-4f5b-814d-076980bd8141): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:15:34 crc kubenswrapper[4869]: E0314 09:15:34.198846 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg" podUID="650e636f-cd1b-4f5b-814d-076980bd8141" Mar 14 09:15:34 crc kubenswrapper[4869]: E0314 09:15:34.388198 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:e6b59eec0f3c9b3227b57f4c98704a37c688a662f49f22756ee8ba0674e81e86\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg" podUID="650e636f-cd1b-4f5b-814d-076980bd8141" Mar 14 09:15:35 crc kubenswrapper[4869]: I0314 09:15:35.290113 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:15:35 crc kubenswrapper[4869]: E0314 09:15:35.290396 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:35 crc kubenswrapper[4869]: E0314 09:15:35.290571 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert podName:a0c504b4-c098-4ce0-930e-289770c5113f nodeName:}" failed. No retries permitted until 2026-03-14 09:15:51.290489941 +0000 UTC m=+1104.262772034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert") pod "infra-operator-controller-manager-fbfb5bd65-ncnch" (UID: "a0c504b4-c098-4ce0-930e-289770c5113f") : secret "infra-operator-webhook-server-cert" not found Mar 14 09:15:35 crc kubenswrapper[4869]: I0314 09:15:35.697008 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:15:35 crc kubenswrapper[4869]: E0314 09:15:35.697793 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:35 crc kubenswrapper[4869]: E0314 09:15:35.697874 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert podName:4afcee0e-ed99-4df2-b68d-ba86e8dedacc nodeName:}" failed. No retries permitted until 2026-03-14 09:15:51.697850832 +0000 UTC m=+1104.670132885 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" (UID: "4afcee0e-ed99-4df2-b68d-ba86e8dedacc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 14 09:15:35 crc kubenswrapper[4869]: E0314 09:15:35.797168 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 14 09:15:35 crc kubenswrapper[4869]: E0314 09:15:35.797365 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: 
{{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqnhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zjmd5_openstack-operators(e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:15:35 crc kubenswrapper[4869]: E0314 09:15:35.798479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" podUID="e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52" Mar 14 09:15:36 crc kubenswrapper[4869]: I0314 09:15:36.102697 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:36 crc kubenswrapper[4869]: 
I0314 09:15:36.102798 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:36 crc kubenswrapper[4869]: E0314 09:15:36.102972 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 14 09:15:36 crc kubenswrapper[4869]: E0314 09:15:36.103020 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:52.10300511 +0000 UTC m=+1105.075287163 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "webhook-server-cert" not found Mar 14 09:15:36 crc kubenswrapper[4869]: E0314 09:15:36.103060 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 14 09:15:36 crc kubenswrapper[4869]: E0314 09:15:36.103227 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs podName:4a5b98d8-17c9-4d94-a61a-2c500a234d2e nodeName:}" failed. No retries permitted until 2026-03-14 09:15:52.103187114 +0000 UTC m=+1105.075469227 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs") pod "openstack-operator-controller-manager-59b5586c67-f56l9" (UID: "4a5b98d8-17c9-4d94-a61a-2c500a234d2e") : secret "metrics-server-cert" not found Mar 14 09:15:36 crc kubenswrapper[4869]: E0314 09:15:36.248746 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:ff20b84a172c2bdeaab0111915b0d1ba99370534ebd720d6daf63153a7d7d59e" Mar 14 09:15:36 crc kubenswrapper[4869]: E0314 09:15:36.249024 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ff20b84a172c2bdeaab0111915b0d1ba99370534ebd720d6daf63153a7d7d59e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-28f2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-58ff56fcc7-n9qfr_openstack-operators(d521dfe5-1037-4df9-a34b-5996da959160): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:15:36 crc kubenswrapper[4869]: E0314 09:15:36.251257 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr" podUID="d521dfe5-1037-4df9-a34b-5996da959160" Mar 14 09:15:36 crc kubenswrapper[4869]: E0314 09:15:36.400676 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ff20b84a172c2bdeaab0111915b0d1ba99370534ebd720d6daf63153a7d7d59e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr" podUID="d521dfe5-1037-4df9-a34b-5996da959160" Mar 14 09:15:36 crc kubenswrapper[4869]: E0314 09:15:36.402469 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" podUID="e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.417260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t" event={"ID":"f7e53cd1-216d-4b42-ad83-9d1098cc888b","Type":"ContainerStarted","Data":"74f0c8af745f0e06e83ea49603d898f67416ba8e10fc9763b8e641e69b7f3ddf"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.417907 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.425875 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747" event={"ID":"6086aaa8-fd6f-4e48-bc77-1b5fad163e38","Type":"ContainerStarted","Data":"45a737ef1af4a792b49100140230b5c3a70ade6bc157727b63aa0b727c118692"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.425967 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.427548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5" event={"ID":"3f340508-914a-4a30-8ba8-2fdafac3f865","Type":"ContainerStarted","Data":"6f2230df50c6469fc62f92379281eb3120ce16e3258913e001ee80a9d04ae179"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.427678 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.435992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7" event={"ID":"32265d81-a0fb-47e8-9cab-d88245cade72","Type":"ContainerStarted","Data":"3e4830f207a2a4e4b5cf38cce8a92fa033c9528ba1f963ea8f689822db539b58"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.436260 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.439619 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99" event={"ID":"09c23762-07cd-45d1-97ce-dc91ffebacfc","Type":"ContainerStarted","Data":"1ecea7a50e561b01143850328a4b685ab4b37bb9cf375833eff16fe930cc1e43"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.439930 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.443165 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t" podStartSLOduration=3.172633723 podStartE2EDuration="18.44314518s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:20.944665765 +0000 UTC m=+1073.916947818" 
lastFinishedPulling="2026-03-14 09:15:36.215177232 +0000 UTC m=+1089.187459275" observedRunningTime="2026-03-14 09:15:37.437319957 +0000 UTC m=+1090.409602010" watchObservedRunningTime="2026-03-14 09:15:37.44314518 +0000 UTC m=+1090.415427233" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.453421 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm" event={"ID":"3259cee4-085a-4ba7-a3f3-117165a3b966","Type":"ContainerStarted","Data":"1b978b17c9131241d0402a72e81665fcbf1d1d9c4e5b10e011c844df37b3dd8a"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.453919 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.458048 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" event={"ID":"3961ac22-8919-4b7a-8b44-64c1c5d9e1be","Type":"ContainerStarted","Data":"703be055842aec4a4f95937f492fd9d5435d9bdf93fa2bb760916c3198805ef1"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.458169 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.458731 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7" podStartSLOduration=3.42504836 podStartE2EDuration="18.458714814s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.181453766 +0000 UTC m=+1074.153735819" lastFinishedPulling="2026-03-14 09:15:36.21512022 +0000 UTC m=+1089.187402273" observedRunningTime="2026-03-14 09:15:37.45370596 +0000 UTC m=+1090.425988013" watchObservedRunningTime="2026-03-14 09:15:37.458714814 +0000 UTC 
m=+1090.430996867" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.465113 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd" event={"ID":"779acd04-3c3b-4b59-8a41-b54250cfb2cb","Type":"ContainerStarted","Data":"ffcb381471bd5a27c57be50be7e86d56f38ee56faac5686d0c5e0b1e2a64ac61"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.470250 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.478761 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747" podStartSLOduration=3.852438493 podStartE2EDuration="18.478718816s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.593212375 +0000 UTC m=+1074.565494428" lastFinishedPulling="2026-03-14 09:15:36.219492698 +0000 UTC m=+1089.191774751" observedRunningTime="2026-03-14 09:15:37.472406211 +0000 UTC m=+1090.444688274" watchObservedRunningTime="2026-03-14 09:15:37.478718816 +0000 UTC m=+1090.451000869" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.481450 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd" event={"ID":"68b90df0-f51f-4365-b2e0-96731de5afe3","Type":"ContainerStarted","Data":"3dc1bc402c39aadcaa9f3e471e6a24531be464fe4fd139a41ac3800f1eb9ec69"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.482210 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.485194 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w" 
event={"ID":"3ea49362-1a35-4a1d-8bc4-1a34041ef967","Type":"ContainerStarted","Data":"ced6eea0fd0645e5e9be57df27c003008f96e1956ebd5b19e8e7d4a84b748317"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.485328 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.491825 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.491868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc" event={"ID":"848518af-f0df-41f4-b0b6-e38b2e1df95b","Type":"ContainerStarted","Data":"26fb1c44e8edd030d32612e1596e8802ed83fb90c25025caabe8cc9cc5b34f99"} Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.498779 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5" podStartSLOduration=3.232875577 podStartE2EDuration="18.498757399s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:20.950680224 +0000 UTC m=+1073.922962277" lastFinishedPulling="2026-03-14 09:15:36.216562046 +0000 UTC m=+1089.188844099" observedRunningTime="2026-03-14 09:15:37.491079221 +0000 UTC m=+1090.463361284" watchObservedRunningTime="2026-03-14 09:15:37.498757399 +0000 UTC m=+1090.471039452" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.526074 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99" podStartSLOduration=3.480654389 podStartE2EDuration="18.526052492s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.187094165 +0000 UTC m=+1074.159376218" 
lastFinishedPulling="2026-03-14 09:15:36.232492268 +0000 UTC m=+1089.204774321" observedRunningTime="2026-03-14 09:15:37.522261188 +0000 UTC m=+1090.494543241" watchObservedRunningTime="2026-03-14 09:15:37.526052492 +0000 UTC m=+1090.498334555" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.557697 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd" podStartSLOduration=3.751815927 podStartE2EDuration="18.557680051s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.411087011 +0000 UTC m=+1074.383369064" lastFinishedPulling="2026-03-14 09:15:36.216951125 +0000 UTC m=+1089.189233188" observedRunningTime="2026-03-14 09:15:37.551823156 +0000 UTC m=+1090.524105209" watchObservedRunningTime="2026-03-14 09:15:37.557680051 +0000 UTC m=+1090.529962104" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.574414 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc" podStartSLOduration=3.773766766 podStartE2EDuration="18.574398872s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.415245083 +0000 UTC m=+1074.387527136" lastFinishedPulling="2026-03-14 09:15:36.215877188 +0000 UTC m=+1089.188159242" observedRunningTime="2026-03-14 09:15:37.568787184 +0000 UTC m=+1090.541069247" watchObservedRunningTime="2026-03-14 09:15:37.574398872 +0000 UTC m=+1090.546680925" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.589501 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm" podStartSLOduration=2.999634973 podStartE2EDuration="18.589483793s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:20.617589911 +0000 UTC m=+1073.589871964" 
lastFinishedPulling="2026-03-14 09:15:36.207438731 +0000 UTC m=+1089.179720784" observedRunningTime="2026-03-14 09:15:37.588722515 +0000 UTC m=+1090.561004588" watchObservedRunningTime="2026-03-14 09:15:37.589483793 +0000 UTC m=+1090.561765836" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.635526 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd" podStartSLOduration=3.826152167 podStartE2EDuration="18.635494357s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.411109471 +0000 UTC m=+1074.383391524" lastFinishedPulling="2026-03-14 09:15:36.220451661 +0000 UTC m=+1089.192733714" observedRunningTime="2026-03-14 09:15:37.635267561 +0000 UTC m=+1090.607549624" watchObservedRunningTime="2026-03-14 09:15:37.635494357 +0000 UTC m=+1090.607776410" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.637711 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w" podStartSLOduration=3.7969313959999997 podStartE2EDuration="18.637704621s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.374194062 +0000 UTC m=+1074.346476115" lastFinishedPulling="2026-03-14 09:15:36.214967297 +0000 UTC m=+1089.187249340" observedRunningTime="2026-03-14 09:15:37.610672885 +0000 UTC m=+1090.582954938" watchObservedRunningTime="2026-03-14 09:15:37.637704621 +0000 UTC m=+1090.609986674" Mar 14 09:15:37 crc kubenswrapper[4869]: I0314 09:15:37.654235 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" podStartSLOduration=3.983675815 podStartE2EDuration="18.654218347s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.589768961 +0000 UTC m=+1074.562051004" 
lastFinishedPulling="2026-03-14 09:15:36.260311483 +0000 UTC m=+1089.232593536" observedRunningTime="2026-03-14 09:15:37.649591974 +0000 UTC m=+1090.621874057" watchObservedRunningTime="2026-03-14 09:15:37.654218347 +0000 UTC m=+1090.626500400" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.538735 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9" event={"ID":"9ab0ae56-f1a8-473a-894f-00af6c8d174b","Type":"ContainerStarted","Data":"9504e91b123f4270aa405540c5ff3ae61f14eb8297c0a7291241a072c7770e05"} Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.539745 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.543482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" event={"ID":"3b368982-02ed-44bb-bba7-9e707d2e4fbf","Type":"ContainerStarted","Data":"21cf1f1e743a0eac74dace63b3c7c7da5070a67aa7ba9edcc5624e25d808dc1f"} Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.543756 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.545269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" event={"ID":"12ffac0c-6749-4576-8bdf-f2eb432a6373","Type":"ContainerStarted","Data":"b83cb38a17d6e0285b1aed5a1ea88835f3655b2315d70c50aea845f0e1b2bfa4"} Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.545880 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.548238 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8" event={"ID":"2f14a802-394d-4f62-a2aa-f5a2595c520e","Type":"ContainerStarted","Data":"1f8ab52dd65dafb14a5ffeeb37e66c663c4ab9d3a0e7d6aa14690fe4eb65f5b4"} Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.548453 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.550630 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh" event={"ID":"ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c","Type":"ContainerStarted","Data":"e0445f68514f5b7bb8be1e87807fed329f71bc70be549fd9dea68e373c3bb65e"} Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.550821 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.552285 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" event={"ID":"451a50a4-ee48-4f61-9c05-514ce3897ffa","Type":"ContainerStarted","Data":"14b077ec2613721bcd9bf974e2c4821959d60e4303d3e2eeb5312d27259802d3"} Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.552414 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.568735 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9" podStartSLOduration=3.291258894 podStartE2EDuration="23.568717586s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 
09:15:21.653298325 +0000 UTC m=+1074.625580378" lastFinishedPulling="2026-03-14 09:15:41.930757017 +0000 UTC m=+1094.903039070" observedRunningTime="2026-03-14 09:15:42.563216102 +0000 UTC m=+1095.535498175" watchObservedRunningTime="2026-03-14 09:15:42.568717586 +0000 UTC m=+1095.540999659" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.593365 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8" podStartSLOduration=3.270727879 podStartE2EDuration="23.593345493s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.608118403 +0000 UTC m=+1074.580400456" lastFinishedPulling="2026-03-14 09:15:41.930736017 +0000 UTC m=+1094.903018070" observedRunningTime="2026-03-14 09:15:42.586884064 +0000 UTC m=+1095.559166117" watchObservedRunningTime="2026-03-14 09:15:42.593345493 +0000 UTC m=+1095.565627556" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.625955 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh" podStartSLOduration=3.343377399 podStartE2EDuration="23.625938886s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.648540458 +0000 UTC m=+1074.620822501" lastFinishedPulling="2026-03-14 09:15:41.931101935 +0000 UTC m=+1094.903383988" observedRunningTime="2026-03-14 09:15:42.618075313 +0000 UTC m=+1095.590357366" watchObservedRunningTime="2026-03-14 09:15:42.625938886 +0000 UTC m=+1095.598220939" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.637382 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" podStartSLOduration=3.249699802 podStartE2EDuration="23.637368767s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.610379059 
+0000 UTC m=+1074.582661112" lastFinishedPulling="2026-03-14 09:15:41.998048024 +0000 UTC m=+1094.970330077" observedRunningTime="2026-03-14 09:15:42.633727768 +0000 UTC m=+1095.606009821" watchObservedRunningTime="2026-03-14 09:15:42.637368767 +0000 UTC m=+1095.609650820" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.659864 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" podStartSLOduration=3.271036718 podStartE2EDuration="23.659848231s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.602140696 +0000 UTC m=+1074.574422749" lastFinishedPulling="2026-03-14 09:15:41.990952209 +0000 UTC m=+1094.963234262" observedRunningTime="2026-03-14 09:15:42.654861438 +0000 UTC m=+1095.627143501" watchObservedRunningTime="2026-03-14 09:15:42.659848231 +0000 UTC m=+1095.632130284" Mar 14 09:15:42 crc kubenswrapper[4869]: I0314 09:15:42.675242 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" podStartSLOduration=3.3466826 podStartE2EDuration="23.6752281s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.602191467 +0000 UTC m=+1074.574473510" lastFinishedPulling="2026-03-14 09:15:41.930736957 +0000 UTC m=+1094.903019010" observedRunningTime="2026-03-14 09:15:42.672326518 +0000 UTC m=+1095.644608571" watchObservedRunningTime="2026-03-14 09:15:42.6752281 +0000 UTC m=+1095.647510153" Mar 14 09:15:49 crc kubenswrapper[4869]: I0314 09:15:49.449453 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-cb6d66846-g9rf5" Mar 14 09:15:49 crc kubenswrapper[4869]: I0314 09:15:49.507994 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-74d565fbd5-c5g8t" Mar 14 09:15:49 crc kubenswrapper[4869]: I0314 09:15:49.586997 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-6d6bd468b-nwggm" Mar 14 09:15:49 crc kubenswrapper[4869]: I0314 09:15:49.681898 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-bf6b7fd8c-q966w" Mar 14 09:15:49 crc kubenswrapper[4869]: I0314 09:15:49.780639 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9c8c85cd7-5xpwd" Mar 14 09:15:49 crc kubenswrapper[4869]: I0314 09:15:49.818833 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-68f8d496f8-zj8hh" Mar 14 09:15:49 crc kubenswrapper[4869]: I0314 09:15:49.884326 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6f6f57b9b6-hd7c8" Mar 14 09:15:49 crc kubenswrapper[4869]: I0314 09:15:49.903422 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-744456f686-bz5rc" Mar 14 09:15:49 crc kubenswrapper[4869]: I0314 09:15:49.908694 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9475cdd7-hb4t9" Mar 14 09:15:50 crc kubenswrapper[4869]: I0314 09:15:50.065905 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-645c9f6488-p4vnd" Mar 14 09:15:50 crc kubenswrapper[4869]: I0314 09:15:50.149801 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-7cf9f49d6-6pr99" Mar 14 09:15:50 crc kubenswrapper[4869]: I0314 09:15:50.204925 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-848d74f969-xt747" Mar 14 09:15:50 crc kubenswrapper[4869]: I0314 09:15:50.342192 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-b5c469fd-2hff7" Mar 14 09:15:50 crc kubenswrapper[4869]: I0314 09:15:50.473819 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7f7469dbc6-msr64" Mar 14 09:15:50 crc kubenswrapper[4869]: I0314 09:15:50.503301 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6646df7cdb-7lbq5" Mar 14 09:15:50 crc kubenswrapper[4869]: I0314 09:15:50.523886 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7cc8dbcb54-9rqrs" Mar 14 09:15:50 crc kubenswrapper[4869]: I0314 09:15:50.561159 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gplwr" Mar 14 09:15:51 crc kubenswrapper[4869]: I0314 09:15:51.351209 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:15:51 crc kubenswrapper[4869]: I0314 09:15:51.366626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a0c504b4-c098-4ce0-930e-289770c5113f-cert\") pod \"infra-operator-controller-manager-fbfb5bd65-ncnch\" (UID: \"a0c504b4-c098-4ce0-930e-289770c5113f\") " pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:15:51 crc kubenswrapper[4869]: I0314 09:15:51.433391 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-r6jnw" Mar 14 09:15:51 crc kubenswrapper[4869]: I0314 09:15:51.441868 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:15:51 crc kubenswrapper[4869]: W0314 09:15:51.711119 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c504b4_c098_4ce0_930e_289770c5113f.slice/crio-a1981dc3c761588f7624ae8fda930963fd0ffe77efdb09fa10f927bd2ec3f3a5 WatchSource:0}: Error finding container a1981dc3c761588f7624ae8fda930963fd0ffe77efdb09fa10f927bd2ec3f3a5: Status 404 returned error can't find the container with id a1981dc3c761588f7624ae8fda930963fd0ffe77efdb09fa10f927bd2ec3f3a5 Mar 14 09:15:51 crc kubenswrapper[4869]: I0314 09:15:51.720558 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch"] Mar 14 09:15:51 crc kubenswrapper[4869]: I0314 09:15:51.763396 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:15:51 crc kubenswrapper[4869]: I0314 09:15:51.772102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/4afcee0e-ed99-4df2-b68d-ba86e8dedacc-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5\" (UID: \"4afcee0e-ed99-4df2-b68d-ba86e8dedacc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:15:51 crc kubenswrapper[4869]: I0314 09:15:51.984310 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-74jts" Mar 14 09:15:51 crc kubenswrapper[4869]: I0314 09:15:51.991984 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.169172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.169698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.173541 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-webhook-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " 
pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.174443 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a5b98d8-17c9-4d94-a61a-2c500a234d2e-metrics-certs\") pod \"openstack-operator-controller-manager-59b5586c67-f56l9\" (UID: \"4a5b98d8-17c9-4d94-a61a-2c500a234d2e\") " pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.381057 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tdx2p" Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.388618 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.479031 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5"] Mar 14 09:15:52 crc kubenswrapper[4869]: W0314 09:15:52.485856 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4afcee0e_ed99_4df2_b68d_ba86e8dedacc.slice/crio-42018b92ccf06646a37f0403ba75f99b121f078801e56a26c560214cf5e7c056 WatchSource:0}: Error finding container 42018b92ccf06646a37f0403ba75f99b121f078801e56a26c560214cf5e7c056: Status 404 returned error can't find the container with id 42018b92ccf06646a37f0403ba75f99b121f078801e56a26c560214cf5e7c056 Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.651271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" 
event={"ID":"a0c504b4-c098-4ce0-930e-289770c5113f","Type":"ContainerStarted","Data":"a1981dc3c761588f7624ae8fda930963fd0ffe77efdb09fa10f927bd2ec3f3a5"} Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.652151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" event={"ID":"4afcee0e-ed99-4df2-b68d-ba86e8dedacc","Type":"ContainerStarted","Data":"42018b92ccf06646a37f0403ba75f99b121f078801e56a26c560214cf5e7c056"} Mar 14 09:15:52 crc kubenswrapper[4869]: I0314 09:15:52.866642 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9"] Mar 14 09:15:53 crc kubenswrapper[4869]: I0314 09:15:53.663495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" event={"ID":"4a5b98d8-17c9-4d94-a61a-2c500a234d2e","Type":"ContainerStarted","Data":"180d55107bdb74867829acf6392563cf5c7389b0697b801126ff95fac4e720fb"} Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.151808 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557996-f24j7"] Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.153164 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557996-f24j7" Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.155770 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.157003 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.157005 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.198652 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zng5h\" (UniqueName: \"kubernetes.io/projected/acf7591e-7c0f-48eb-a174-b926b51c75a5-kube-api-access-zng5h\") pod \"auto-csr-approver-29557996-f24j7\" (UID: \"acf7591e-7c0f-48eb-a174-b926b51c75a5\") " pod="openshift-infra/auto-csr-approver-29557996-f24j7" Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.202546 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557996-f24j7"] Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.300770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zng5h\" (UniqueName: \"kubernetes.io/projected/acf7591e-7c0f-48eb-a174-b926b51c75a5-kube-api-access-zng5h\") pod \"auto-csr-approver-29557996-f24j7\" (UID: \"acf7591e-7c0f-48eb-a174-b926b51c75a5\") " pod="openshift-infra/auto-csr-approver-29557996-f24j7" Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.324076 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zng5h\" (UniqueName: \"kubernetes.io/projected/acf7591e-7c0f-48eb-a174-b926b51c75a5-kube-api-access-zng5h\") pod \"auto-csr-approver-29557996-f24j7\" (UID: \"acf7591e-7c0f-48eb-a174-b926b51c75a5\") " 
pod="openshift-infra/auto-csr-approver-29557996-f24j7" Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.495914 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557996-f24j7" Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.725871 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" event={"ID":"4a5b98d8-17c9-4d94-a61a-2c500a234d2e","Type":"ContainerStarted","Data":"37aac739cdba6d3c89ad2e9fb316e05d9c623937b3f038e2586e0a2e8da19f6d"} Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.726097 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:16:00 crc kubenswrapper[4869]: I0314 09:16:00.766734 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" podStartSLOduration=41.766706542 podStartE2EDuration="41.766706542s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:16:00.760267973 +0000 UTC m=+1113.732550026" watchObservedRunningTime="2026-03-14 09:16:00.766706542 +0000 UTC m=+1113.738988605" Mar 14 09:16:10 crc kubenswrapper[4869]: E0314 09:16:10.787241 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1084671554/1\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:d8f38654cb385d3ff582419746c3d68d64c43cea412622f0e5dfcb32ee5ab47b" Mar 14 09:16:10 crc kubenswrapper[4869]: E0314 09:16:10.788463 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:d8f38654cb385d3ff582419746c3d68d64c43cea412622f0e5dfcb32ee5ab47b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:18.0-fr5-latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:18.0-fr5-latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/op
enstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:qua
y.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_META
DATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value
:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-an
telope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:18.0-fr5-latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_
IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_T
EST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8sm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5_openstack-operators(4afcee0e-ed99-4df2-b68d-ba86e8dedacc): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1084671554/1\": happened during read: context canceled" logger="UnhandledError" Mar 14 09:16:10 crc kubenswrapper[4869]: E0314 09:16:10.789967 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1084671554/1\\\": happened during read: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" podUID="4afcee0e-ed99-4df2-b68d-ba86e8dedacc" Mar 14 09:16:10 crc kubenswrapper[4869]: E0314 09:16:10.802973 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:d8f38654cb385d3ff582419746c3d68d64c43cea412622f0e5dfcb32ee5ab47b\\\"\"" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" podUID="4afcee0e-ed99-4df2-b68d-ba86e8dedacc" Mar 14 09:16:11 crc kubenswrapper[4869]: E0314 09:16:11.592050 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:44f4955b9906f1d5905790c538b39cd7e1bfc15406541341479191f87d1e5b4d" Mar 14 09:16:11 crc kubenswrapper[4869]: E0314 09:16:11.592704 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:44f4955b9906f1d5905790c538b39cd7e1bfc15406541341479191f87d1e5b4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxm74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-fbfb5bd65-ncnch_openstack-operators(a0c504b4-c098-4ce0-930e-289770c5113f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:16:11 crc kubenswrapper[4869]: E0314 09:16:11.594703 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" podUID="a0c504b4-c098-4ce0-930e-289770c5113f" Mar 14 09:16:11 crc kubenswrapper[4869]: E0314 09:16:11.811080 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/infra-operator@sha256:44f4955b9906f1d5905790c538b39cd7e1bfc15406541341479191f87d1e5b4d\\\"\"" pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" podUID="a0c504b4-c098-4ce0-930e-289770c5113f" Mar 14 09:16:11 crc kubenswrapper[4869]: I0314 09:16:11.863669 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557996-f24j7"] Mar 14 09:16:11 crc kubenswrapper[4869]: W0314 09:16:11.874818 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacf7591e_7c0f_48eb_a174_b926b51c75a5.slice/crio-d7c12804906be3120e11b8e0e3642a45842b9b7696a9987b9601107099a2abc8 WatchSource:0}: Error finding container d7c12804906be3120e11b8e0e3642a45842b9b7696a9987b9601107099a2abc8: Status 404 returned error can't find the container with id d7c12804906be3120e11b8e0e3642a45842b9b7696a9987b9601107099a2abc8 Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.400721 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-59b5586c67-f56l9" Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.822072 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg" event={"ID":"650e636f-cd1b-4f5b-814d-076980bd8141","Type":"ContainerStarted","Data":"42ca8933df05d0196b19b6b381b5ee6efc46958c12516521dcd7a94fced055a5"} Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.822295 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg" Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.824138 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr" 
event={"ID":"d521dfe5-1037-4df9-a34b-5996da959160","Type":"ContainerStarted","Data":"81c0040fd312f1fe722c1e9fe7c445bfc5e7f412478cfa43180380db659b4247"} Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.824352 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr" Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.826912 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557996-f24j7" event={"ID":"acf7591e-7c0f-48eb-a174-b926b51c75a5","Type":"ContainerStarted","Data":"d7c12804906be3120e11b8e0e3642a45842b9b7696a9987b9601107099a2abc8"} Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.830561 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" event={"ID":"e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52","Type":"ContainerStarted","Data":"a3d1e03e71d1ab60d895bffcaf59d1ad911b2ea5e82fd95818366d1a2c353b31"} Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.840340 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg" podStartSLOduration=3.219870806 podStartE2EDuration="53.840289151s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:20.96231524 +0000 UTC m=+1073.934597293" lastFinishedPulling="2026-03-14 09:16:11.582733555 +0000 UTC m=+1124.555015638" observedRunningTime="2026-03-14 09:16:12.839974103 +0000 UTC m=+1125.812256166" watchObservedRunningTime="2026-03-14 09:16:12.840289151 +0000 UTC m=+1125.812571204" Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.858726 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr" podStartSLOduration=3.686043774 podStartE2EDuration="53.858706593s" 
podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.409973774 +0000 UTC m=+1074.382255827" lastFinishedPulling="2026-03-14 09:16:11.582636553 +0000 UTC m=+1124.554918646" observedRunningTime="2026-03-14 09:16:12.851479996 +0000 UTC m=+1125.823762059" watchObservedRunningTime="2026-03-14 09:16:12.858706593 +0000 UTC m=+1125.830988646" Mar 14 09:16:12 crc kubenswrapper[4869]: I0314 09:16:12.872265 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zjmd5" podStartSLOduration=2.735258526 podStartE2EDuration="52.872252706s" podCreationTimestamp="2026-03-14 09:15:20 +0000 UTC" firstStartedPulling="2026-03-14 09:15:21.448605675 +0000 UTC m=+1074.420887738" lastFinishedPulling="2026-03-14 09:16:11.585599825 +0000 UTC m=+1124.557881918" observedRunningTime="2026-03-14 09:16:12.870063872 +0000 UTC m=+1125.842345925" watchObservedRunningTime="2026-03-14 09:16:12.872252706 +0000 UTC m=+1125.844534759" Mar 14 09:16:13 crc kubenswrapper[4869]: I0314 09:16:13.840354 4869 generic.go:334] "Generic (PLEG): container finished" podID="acf7591e-7c0f-48eb-a174-b926b51c75a5" containerID="6609ae58ffeb67243086dca76b9cb01312dbbf3ad47af6125fa3c4518555ba04" exitCode=0 Mar 14 09:16:13 crc kubenswrapper[4869]: I0314 09:16:13.840429 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557996-f24j7" event={"ID":"acf7591e-7c0f-48eb-a174-b926b51c75a5","Type":"ContainerDied","Data":"6609ae58ffeb67243086dca76b9cb01312dbbf3ad47af6125fa3c4518555ba04"} Mar 14 09:16:15 crc kubenswrapper[4869]: I0314 09:16:15.074876 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557996-f24j7" Mar 14 09:16:15 crc kubenswrapper[4869]: I0314 09:16:15.238198 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zng5h\" (UniqueName: \"kubernetes.io/projected/acf7591e-7c0f-48eb-a174-b926b51c75a5-kube-api-access-zng5h\") pod \"acf7591e-7c0f-48eb-a174-b926b51c75a5\" (UID: \"acf7591e-7c0f-48eb-a174-b926b51c75a5\") " Mar 14 09:16:15 crc kubenswrapper[4869]: I0314 09:16:15.244088 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf7591e-7c0f-48eb-a174-b926b51c75a5-kube-api-access-zng5h" (OuterVolumeSpecName: "kube-api-access-zng5h") pod "acf7591e-7c0f-48eb-a174-b926b51c75a5" (UID: "acf7591e-7c0f-48eb-a174-b926b51c75a5"). InnerVolumeSpecName "kube-api-access-zng5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:16:15 crc kubenswrapper[4869]: I0314 09:16:15.340161 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zng5h\" (UniqueName: \"kubernetes.io/projected/acf7591e-7c0f-48eb-a174-b926b51c75a5-kube-api-access-zng5h\") on node \"crc\" DevicePath \"\"" Mar 14 09:16:15 crc kubenswrapper[4869]: I0314 09:16:15.863705 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557996-f24j7" event={"ID":"acf7591e-7c0f-48eb-a174-b926b51c75a5","Type":"ContainerDied","Data":"d7c12804906be3120e11b8e0e3642a45842b9b7696a9987b9601107099a2abc8"} Mar 14 09:16:15 crc kubenswrapper[4869]: I0314 09:16:15.863768 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c12804906be3120e11b8e0e3642a45842b9b7696a9987b9601107099a2abc8" Mar 14 09:16:15 crc kubenswrapper[4869]: I0314 09:16:15.863773 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557996-f24j7" Mar 14 09:16:16 crc kubenswrapper[4869]: I0314 09:16:16.141412 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557990-d4fln"] Mar 14 09:16:16 crc kubenswrapper[4869]: I0314 09:16:16.147548 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557990-d4fln"] Mar 14 09:16:17 crc kubenswrapper[4869]: I0314 09:16:17.714831 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a61b7e-10da-4b46-9d85-4833360660fe" path="/var/lib/kubelet/pods/27a61b7e-10da-4b46-9d85-4833360660fe/volumes" Mar 14 09:16:19 crc kubenswrapper[4869]: I0314 09:16:19.441101 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-64768694d-fjdmg" Mar 14 09:16:20 crc kubenswrapper[4869]: I0314 09:16:20.113735 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-58ff56fcc7-n9qfr" Mar 14 09:16:27 crc kubenswrapper[4869]: I0314 09:16:27.954214 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" event={"ID":"a0c504b4-c098-4ce0-930e-289770c5113f","Type":"ContainerStarted","Data":"c251ebed1a3031df50ce5fc7bccec20eda54448778e2df84760f66de3605ce45"} Mar 14 09:16:27 crc kubenswrapper[4869]: I0314 09:16:27.955147 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:16:27 crc kubenswrapper[4869]: I0314 09:16:27.974851 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" podStartSLOduration=33.095838136 podStartE2EDuration="1m8.974828271s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" 
firstStartedPulling="2026-03-14 09:15:51.714171883 +0000 UTC m=+1104.686453946" lastFinishedPulling="2026-03-14 09:16:27.593162018 +0000 UTC m=+1140.565444081" observedRunningTime="2026-03-14 09:16:27.967312106 +0000 UTC m=+1140.939594159" watchObservedRunningTime="2026-03-14 09:16:27.974828271 +0000 UTC m=+1140.947110324" Mar 14 09:16:28 crc kubenswrapper[4869]: I0314 09:16:28.963167 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" event={"ID":"4afcee0e-ed99-4df2-b68d-ba86e8dedacc","Type":"ContainerStarted","Data":"1ec59fb9c37f374a803d3dc116e249d8c328a449ab686519744aa1f4120f3c44"} Mar 14 09:16:28 crc kubenswrapper[4869]: I0314 09:16:28.963825 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:16:28 crc kubenswrapper[4869]: I0314 09:16:28.993396 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" podStartSLOduration=34.152893754 podStartE2EDuration="1m9.993381711s" podCreationTimestamp="2026-03-14 09:15:19 +0000 UTC" firstStartedPulling="2026-03-14 09:15:52.487639089 +0000 UTC m=+1105.459921142" lastFinishedPulling="2026-03-14 09:16:28.328127046 +0000 UTC m=+1141.300409099" observedRunningTime="2026-03-14 09:16:28.990689985 +0000 UTC m=+1141.962972048" watchObservedRunningTime="2026-03-14 09:16:28.993381711 +0000 UTC m=+1141.965663764" Mar 14 09:16:39 crc kubenswrapper[4869]: I0314 09:16:39.604892 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:16:39 crc kubenswrapper[4869]: I0314 09:16:39.605734 
4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:16:41 crc kubenswrapper[4869]: I0314 09:16:41.449157 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-fbfb5bd65-ncnch" Mar 14 09:16:41 crc kubenswrapper[4869]: I0314 09:16:41.998911 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5" Mar 14 09:17:00 crc kubenswrapper[4869]: I0314 09:17:00.540721 4869 scope.go:117] "RemoveContainer" containerID="3fedee07a079e936f700f4e70f51bf828fce1af58c5fa32aa6f1e372d425dcdd" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.430148 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65d4fd8749-f7ft9"] Mar 14 09:17:01 crc kubenswrapper[4869]: E0314 09:17:01.431431 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf7591e-7c0f-48eb-a174-b926b51c75a5" containerName="oc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.431447 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf7591e-7c0f-48eb-a174-b926b51c75a5" containerName="oc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.434970 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf7591e-7c0f-48eb-a174-b926b51c75a5" containerName="oc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.435866 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.438407 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65d4fd8749-f7ft9"] Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.439792 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.440104 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.440298 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.440443 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-9qc4g" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.455083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-config\") pod \"dnsmasq-dns-65d4fd8749-f7ft9\" (UID: \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\") " pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.455475 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2sxz\" (UniqueName: \"kubernetes.io/projected/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-kube-api-access-g2sxz\") pod \"dnsmasq-dns-65d4fd8749-f7ft9\" (UID: \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\") " pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.531499 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864dcb4fb5-9qqrc"] Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.532939 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.535826 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.553350 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864dcb4fb5-9qqrc"] Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.560774 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-config\") pod \"dnsmasq-dns-65d4fd8749-f7ft9\" (UID: \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\") " pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.560834 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24wcp\" (UniqueName: \"kubernetes.io/projected/ba487a08-578b-4a02-a8c2-3aea942b953a-kube-api-access-24wcp\") pod \"dnsmasq-dns-864dcb4fb5-9qqrc\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.560877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-dns-svc\") pod \"dnsmasq-dns-864dcb4fb5-9qqrc\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.560904 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2sxz\" (UniqueName: \"kubernetes.io/projected/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-kube-api-access-g2sxz\") pod \"dnsmasq-dns-65d4fd8749-f7ft9\" (UID: \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\") " pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:01 
crc kubenswrapper[4869]: I0314 09:17:01.561032 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-config\") pod \"dnsmasq-dns-864dcb4fb5-9qqrc\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.561599 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-config\") pod \"dnsmasq-dns-65d4fd8749-f7ft9\" (UID: \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\") " pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.609544 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2sxz\" (UniqueName: \"kubernetes.io/projected/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-kube-api-access-g2sxz\") pod \"dnsmasq-dns-65d4fd8749-f7ft9\" (UID: \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\") " pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.661787 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24wcp\" (UniqueName: \"kubernetes.io/projected/ba487a08-578b-4a02-a8c2-3aea942b953a-kube-api-access-24wcp\") pod \"dnsmasq-dns-864dcb4fb5-9qqrc\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.661836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-dns-svc\") pod \"dnsmasq-dns-864dcb4fb5-9qqrc\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.661880 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-config\") pod \"dnsmasq-dns-864dcb4fb5-9qqrc\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.662711 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-config\") pod \"dnsmasq-dns-864dcb4fb5-9qqrc\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.662920 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-dns-svc\") pod \"dnsmasq-dns-864dcb4fb5-9qqrc\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.683184 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24wcp\" (UniqueName: \"kubernetes.io/projected/ba487a08-578b-4a02-a8c2-3aea942b953a-kube-api-access-24wcp\") pod \"dnsmasq-dns-864dcb4fb5-9qqrc\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.758835 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:01 crc kubenswrapper[4869]: I0314 09:17:01.849788 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:02 crc kubenswrapper[4869]: I0314 09:17:02.204294 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65d4fd8749-f7ft9"] Mar 14 09:17:02 crc kubenswrapper[4869]: I0314 09:17:02.252476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" event={"ID":"6f7a929f-378d-41fd-8dfb-b332c9db0f7e","Type":"ContainerStarted","Data":"52528111bc71a960ee87a2b42c87c7642e9cdfa5a0064dee959fe60968628075"} Mar 14 09:17:02 crc kubenswrapper[4869]: I0314 09:17:02.325681 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864dcb4fb5-9qqrc"] Mar 14 09:17:02 crc kubenswrapper[4869]: W0314 09:17:02.345291 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba487a08_578b_4a02_a8c2_3aea942b953a.slice/crio-e9b127cb1a21a781707177e38832dd03d8ebb2e0ab32e6e46f9b2e357c800e57 WatchSource:0}: Error finding container e9b127cb1a21a781707177e38832dd03d8ebb2e0ab32e6e46f9b2e357c800e57: Status 404 returned error can't find the container with id e9b127cb1a21a781707177e38832dd03d8ebb2e0ab32e6e46f9b2e357c800e57 Mar 14 09:17:03 crc kubenswrapper[4869]: I0314 09:17:03.261781 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" event={"ID":"ba487a08-578b-4a02-a8c2-3aea942b953a","Type":"ContainerStarted","Data":"e9b127cb1a21a781707177e38832dd03d8ebb2e0ab32e6e46f9b2e357c800e57"} Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.214955 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65d4fd8749-f7ft9"] Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.246217 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675cdbc945-kvp74"] Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.247397 4869 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.259122 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675cdbc945-kvp74"] Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.418893 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9rv\" (UniqueName: \"kubernetes.io/projected/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-kube-api-access-qs9rv\") pod \"dnsmasq-dns-675cdbc945-kvp74\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.418963 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-dns-svc\") pod \"dnsmasq-dns-675cdbc945-kvp74\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.419127 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-config\") pod \"dnsmasq-dns-675cdbc945-kvp74\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.484462 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864dcb4fb5-9qqrc"] Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.509303 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f7c99478f-pn2sb"] Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.510490 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.519268 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7c99478f-pn2sb"] Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.520488 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9fw6\" (UniqueName: \"kubernetes.io/projected/e0310552-638b-4d95-8946-87187f5815a6-kube-api-access-f9fw6\") pod \"dnsmasq-dns-7f7c99478f-pn2sb\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.520594 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-dns-svc\") pod \"dnsmasq-dns-7f7c99478f-pn2sb\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.520708 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-config\") pod \"dnsmasq-dns-7f7c99478f-pn2sb\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.520844 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs9rv\" (UniqueName: \"kubernetes.io/projected/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-kube-api-access-qs9rv\") pod \"dnsmasq-dns-675cdbc945-kvp74\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.520909 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-dns-svc\") pod \"dnsmasq-dns-675cdbc945-kvp74\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.520996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-config\") pod \"dnsmasq-dns-675cdbc945-kvp74\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.522164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-config\") pod \"dnsmasq-dns-675cdbc945-kvp74\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.522887 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-dns-svc\") pod \"dnsmasq-dns-675cdbc945-kvp74\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.556229 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs9rv\" (UniqueName: \"kubernetes.io/projected/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-kube-api-access-qs9rv\") pod \"dnsmasq-dns-675cdbc945-kvp74\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.566864 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.621350 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9fw6\" (UniqueName: \"kubernetes.io/projected/e0310552-638b-4d95-8946-87187f5815a6-kube-api-access-f9fw6\") pod \"dnsmasq-dns-7f7c99478f-pn2sb\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.621392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-dns-svc\") pod \"dnsmasq-dns-7f7c99478f-pn2sb\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.621427 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-config\") pod \"dnsmasq-dns-7f7c99478f-pn2sb\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.622225 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-config\") pod \"dnsmasq-dns-7f7c99478f-pn2sb\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.622981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-dns-svc\") pod \"dnsmasq-dns-7f7c99478f-pn2sb\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 
09:17:05.643447 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9fw6\" (UniqueName: \"kubernetes.io/projected/e0310552-638b-4d95-8946-87187f5815a6-kube-api-access-f9fw6\") pod \"dnsmasq-dns-7f7c99478f-pn2sb\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.796063 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675cdbc945-kvp74"] Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.824591 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.826349 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c9969dd5-5h7jl"] Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.843315 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.867124 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c9969dd5-5h7jl"] Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.926682 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm5hx\" (UniqueName: \"kubernetes.io/projected/3444ec97-2efb-4a1a-b288-5b518eda928d-kube-api-access-zm5hx\") pod \"dnsmasq-dns-78c9969dd5-5h7jl\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.926769 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-dns-svc\") pod \"dnsmasq-dns-78c9969dd5-5h7jl\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " 
pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:05 crc kubenswrapper[4869]: I0314 09:17:05.926810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-config\") pod \"dnsmasq-dns-78c9969dd5-5h7jl\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.030456 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm5hx\" (UniqueName: \"kubernetes.io/projected/3444ec97-2efb-4a1a-b288-5b518eda928d-kube-api-access-zm5hx\") pod \"dnsmasq-dns-78c9969dd5-5h7jl\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.030535 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-dns-svc\") pod \"dnsmasq-dns-78c9969dd5-5h7jl\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.030583 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-config\") pod \"dnsmasq-dns-78c9969dd5-5h7jl\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.031886 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-config\") pod \"dnsmasq-dns-78c9969dd5-5h7jl\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 
09:17:06.032324 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-dns-svc\") pod \"dnsmasq-dns-78c9969dd5-5h7jl\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.055385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm5hx\" (UniqueName: \"kubernetes.io/projected/3444ec97-2efb-4a1a-b288-5b518eda928d-kube-api-access-zm5hx\") pod \"dnsmasq-dns-78c9969dd5-5h7jl\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.125876 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675cdbc945-kvp74"] Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.175943 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.440855 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.442150 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.443905 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.444120 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.444263 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.447266 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.447323 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.447395 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.447766 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-g2lgj" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.465037 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.640365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.640938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.640976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38c3b4a0-0639-4d3b-ae4f-3e272522326f-config-data\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.641024 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38c3b4a0-0639-4d3b-ae4f-3e272522326f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.641057 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38c3b4a0-0639-4d3b-ae4f-3e272522326f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.641224 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.641413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/38c3b4a0-0639-4d3b-ae4f-3e272522326f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.641479 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38c3b4a0-0639-4d3b-ae4f-3e272522326f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.641618 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.641669 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.641751 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bhpq\" (UniqueName: \"kubernetes.io/projected/38c3b4a0-0639-4d3b-ae4f-3e272522326f-kube-api-access-4bhpq\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.659000 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.660712 4869 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.663298 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.664022 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.664278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-s9998" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.664606 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.664901 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.665126 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.665934 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.683382 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743641 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bhpq\" (UniqueName: \"kubernetes.io/projected/38c3b4a0-0639-4d3b-ae4f-3e272522326f-kube-api-access-4bhpq\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38c3b4a0-0639-4d3b-ae4f-3e272522326f-config-data\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743767 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743804 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38c3b4a0-0639-4d3b-ae4f-3e272522326f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743826 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38c3b4a0-0639-4d3b-ae4f-3e272522326f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743880 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") 
" pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743930 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38c3b4a0-0639-4d3b-ae4f-3e272522326f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743954 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38c3b4a0-0639-4d3b-ae4f-3e272522326f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.743986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.744007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.744469 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.745260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.745781 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/38c3b4a0-0639-4d3b-ae4f-3e272522326f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.745834 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/38c3b4a0-0639-4d3b-ae4f-3e272522326f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.745889 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.746324 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38c3b4a0-0639-4d3b-ae4f-3e272522326f-config-data\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.749264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/38c3b4a0-0639-4d3b-ae4f-3e272522326f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.749327 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.760006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/38c3b4a0-0639-4d3b-ae4f-3e272522326f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.760547 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/38c3b4a0-0639-4d3b-ae4f-3e272522326f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.763049 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bhpq\" (UniqueName: \"kubernetes.io/projected/38c3b4a0-0639-4d3b-ae4f-3e272522326f-kube-api-access-4bhpq\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.777558 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"38c3b4a0-0639-4d3b-ae4f-3e272522326f\") " pod="openstack/rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.845835 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.845914 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9735b30c-8379-4478-9460-51882d519d32-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.845944 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9735b30c-8379-4478-9460-51882d519d32-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.845983 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvqs\" (UniqueName: \"kubernetes.io/projected/9735b30c-8379-4478-9460-51882d519d32-kube-api-access-7dvqs\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.846009 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9735b30c-8379-4478-9460-51882d519d32-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.846040 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.846085 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.846159 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.846192 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9735b30c-8379-4478-9460-51882d519d32-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.846213 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.846238 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9735b30c-8379-4478-9460-51882d519d32-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.937339 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.938653 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.941118 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.942414 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.942775 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.943200 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.943775 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.943731 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-hqltf" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.944109 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947649 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947701 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9735b30c-8379-4478-9460-51882d519d32-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947725 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9735b30c-8379-4478-9460-51882d519d32-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947806 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9735b30c-8379-4478-9460-51882d519d32-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947870 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9735b30c-8379-4478-9460-51882d519d32-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947906 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dvqs\" (UniqueName: \"kubernetes.io/projected/9735b30c-8379-4478-9460-51882d519d32-kube-api-access-7dvqs\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947924 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9735b30c-8379-4478-9460-51882d519d32-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947949 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.947970 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.953459 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9735b30c-8379-4478-9460-51882d519d32-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.953627 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.953726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9735b30c-8379-4478-9460-51882d519d32-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.954128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9735b30c-8379-4478-9460-51882d519d32-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.954186 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc 
kubenswrapper[4869]: I0314 09:17:06.954231 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.954579 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.955684 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9735b30c-8379-4478-9460-51882d519d32-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.957007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9735b30c-8379-4478-9460-51882d519d32-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.962225 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9735b30c-8379-4478-9460-51882d519d32-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.977204 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7dvqs\" (UniqueName: \"kubernetes.io/projected/9735b30c-8379-4478-9460-51882d519d32-kube-api-access-7dvqs\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.978689 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 14 09:17:06 crc kubenswrapper[4869]: I0314 09:17:06.989846 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9735b30c-8379-4478-9460-51882d519d32\") " pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.033644 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.049658 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.049718 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da13efd4-046a-4059-9b04-b731f2d164b5-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.049745 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxs94\" (UniqueName: 
\"kubernetes.io/projected/da13efd4-046a-4059-9b04-b731f2d164b5-kube-api-access-sxs94\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.049807 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da13efd4-046a-4059-9b04-b731f2d164b5-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.049837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.049893 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da13efd4-046a-4059-9b04-b731f2d164b5-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.049924 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da13efd4-046a-4059-9b04-b731f2d164b5-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.050034 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.050086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.050131 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da13efd4-046a-4059-9b04-b731f2d164b5-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.050348 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.071008 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153664 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153733 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da13efd4-046a-4059-9b04-b731f2d164b5-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da13efd4-046a-4059-9b04-b731f2d164b5-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153790 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153813 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " 
pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da13efd4-046a-4059-9b04-b731f2d164b5-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153866 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153914 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153931 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da13efd4-046a-4059-9b04-b731f2d164b5-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxs94\" (UniqueName: \"kubernetes.io/projected/da13efd4-046a-4059-9b04-b731f2d164b5-kube-api-access-sxs94\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " 
pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.153986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da13efd4-046a-4059-9b04-b731f2d164b5-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.154803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.156200 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da13efd4-046a-4059-9b04-b731f2d164b5-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.156346 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da13efd4-046a-4059-9b04-b731f2d164b5-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.156422 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" 
Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.156784 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.157081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da13efd4-046a-4059-9b04-b731f2d164b5-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.159068 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.159767 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da13efd4-046a-4059-9b04-b731f2d164b5-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.163230 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da13efd4-046a-4059-9b04-b731f2d164b5-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: 
I0314 09:17:07.173789 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da13efd4-046a-4059-9b04-b731f2d164b5-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.178220 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxs94\" (UniqueName: \"kubernetes.io/projected/da13efd4-046a-4059-9b04-b731f2d164b5-kube-api-access-sxs94\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.187663 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"da13efd4-046a-4059-9b04-b731f2d164b5\") " pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:07 crc kubenswrapper[4869]: I0314 09:17:07.328775 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.409935 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.411882 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.419774 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.423105 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x9xpr" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.423369 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.426805 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.427147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.434823 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.474413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.474473 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.474538 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.474593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbgsc\" (UniqueName: \"kubernetes.io/projected/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-kube-api-access-lbgsc\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.474676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.474711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-kolla-config\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.474758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-config-data-default\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.474784 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.575982 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbgsc\" (UniqueName: \"kubernetes.io/projected/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-kube-api-access-lbgsc\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.576044 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.576065 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-kolla-config\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.576089 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-config-data-default\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.576106 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.576134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.576157 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.576185 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.577006 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.577356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-config-data-default\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.577935 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.578581 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-kolla-config\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.579642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.587401 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.591924 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.597253 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbgsc\" (UniqueName: 
\"kubernetes.io/projected/d42f4faa-b0db-40b7-acd5-c89f1eaf19ff-kube-api-access-lbgsc\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.620881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff\") " pod="openstack/openstack-galera-0" Mar 14 09:17:08 crc kubenswrapper[4869]: I0314 09:17:08.739526 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.607430 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.607705 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.846129 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.848063 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.853821 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8crtn" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.854321 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.854463 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.854791 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.855387 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.895699 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b16088c-48ba-4c09-91b1-a0447bced81b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.897292 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.897325 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b16088c-48ba-4c09-91b1-a0447bced81b-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.897344 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b16088c-48ba-4c09-91b1-a0447bced81b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.897373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b16088c-48ba-4c09-91b1-a0447bced81b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.897408 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b16088c-48ba-4c09-91b1-a0447bced81b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.897453 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b16088c-48ba-4c09-91b1-a0447bced81b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.897520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzkgq\" (UniqueName: 
\"kubernetes.io/projected/2b16088c-48ba-4c09-91b1-a0447bced81b-kube-api-access-lzkgq\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.998885 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b16088c-48ba-4c09-91b1-a0447bced81b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.998959 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b16088c-48ba-4c09-91b1-a0447bced81b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.999021 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzkgq\" (UniqueName: \"kubernetes.io/projected/2b16088c-48ba-4c09-91b1-a0447bced81b-kube-api-access-lzkgq\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.999041 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b16088c-48ba-4c09-91b1-a0447bced81b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.999074 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.999104 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b16088c-48ba-4c09-91b1-a0447bced81b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.999129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b16088c-48ba-4c09-91b1-a0447bced81b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:09 crc kubenswrapper[4869]: I0314 09:17:09.999172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b16088c-48ba-4c09-91b1-a0447bced81b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:09.999684 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2b16088c-48ba-4c09-91b1-a0447bced81b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:09.999922 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") device mount path 
\"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:09.999925 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b16088c-48ba-4c09-91b1-a0447bced81b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.000055 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2b16088c-48ba-4c09-91b1-a0447bced81b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.000839 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b16088c-48ba-4c09-91b1-a0447bced81b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.004718 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b16088c-48ba-4c09-91b1-a0447bced81b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.021639 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b16088c-48ba-4c09-91b1-a0447bced81b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 
09:17:10.024722 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzkgq\" (UniqueName: \"kubernetes.io/projected/2b16088c-48ba-4c09-91b1-a0447bced81b-kube-api-access-lzkgq\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.034734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2b16088c-48ba-4c09-91b1-a0447bced81b\") " pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.183789 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.198198 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.199384 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.202776 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-p5cf6" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.202819 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.203157 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.215213 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.303311 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89jsg\" (UniqueName: \"kubernetes.io/projected/4f89c32b-b055-4d5e-aa56-a5f41553707c-kube-api-access-89jsg\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.303400 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f89c32b-b055-4d5e-aa56-a5f41553707c-config-data\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.303423 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f89c32b-b055-4d5e-aa56-a5f41553707c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.303437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f89c32b-b055-4d5e-aa56-a5f41553707c-kolla-config\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.303459 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f89c32b-b055-4d5e-aa56-a5f41553707c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.404752 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89jsg\" (UniqueName: \"kubernetes.io/projected/4f89c32b-b055-4d5e-aa56-a5f41553707c-kube-api-access-89jsg\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.404861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f89c32b-b055-4d5e-aa56-a5f41553707c-config-data\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.404898 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f89c32b-b055-4d5e-aa56-a5f41553707c-kolla-config\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.404928 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f89c32b-b055-4d5e-aa56-a5f41553707c-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.404966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f89c32b-b055-4d5e-aa56-a5f41553707c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.406088 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f89c32b-b055-4d5e-aa56-a5f41553707c-kolla-config\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.406124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f89c32b-b055-4d5e-aa56-a5f41553707c-config-data\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.410493 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f89c32b-b055-4d5e-aa56-a5f41553707c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.410640 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f89c32b-b055-4d5e-aa56-a5f41553707c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.423494 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89jsg\" (UniqueName: 
\"kubernetes.io/projected/4f89c32b-b055-4d5e-aa56-a5f41553707c-kube-api-access-89jsg\") pod \"memcached-0\" (UID: \"4f89c32b-b055-4d5e-aa56-a5f41553707c\") " pod="openstack/memcached-0" Mar 14 09:17:10 crc kubenswrapper[4869]: I0314 09:17:10.518372 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 14 09:17:11 crc kubenswrapper[4869]: I0314 09:17:11.353482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675cdbc945-kvp74" event={"ID":"b99674e2-9283-4ba9-bf6a-cdbfd8763ada","Type":"ContainerStarted","Data":"8aec2c7c409b1b20cdf737e5b40e7219151ca99cca8318f1d015e0260ed13b59"} Mar 14 09:17:12 crc kubenswrapper[4869]: I0314 09:17:12.605788 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:17:12 crc kubenswrapper[4869]: I0314 09:17:12.606921 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:17:12 crc kubenswrapper[4869]: I0314 09:17:12.611654 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bf7g7" Mar 14 09:17:12 crc kubenswrapper[4869]: I0314 09:17:12.622074 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:17:12 crc kubenswrapper[4869]: I0314 09:17:12.652131 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2c5b\" (UniqueName: \"kubernetes.io/projected/3c3c8469-db90-4d75-bfb4-6f2be6ee77bd-kube-api-access-d2c5b\") pod \"kube-state-metrics-0\" (UID: \"3c3c8469-db90-4d75-bfb4-6f2be6ee77bd\") " pod="openstack/kube-state-metrics-0" Mar 14 09:17:12 crc kubenswrapper[4869]: I0314 09:17:12.754758 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2c5b\" (UniqueName: 
\"kubernetes.io/projected/3c3c8469-db90-4d75-bfb4-6f2be6ee77bd-kube-api-access-d2c5b\") pod \"kube-state-metrics-0\" (UID: \"3c3c8469-db90-4d75-bfb4-6f2be6ee77bd\") " pod="openstack/kube-state-metrics-0" Mar 14 09:17:12 crc kubenswrapper[4869]: I0314 09:17:12.782605 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2c5b\" (UniqueName: \"kubernetes.io/projected/3c3c8469-db90-4d75-bfb4-6f2be6ee77bd-kube-api-access-d2c5b\") pod \"kube-state-metrics-0\" (UID: \"3c3c8469-db90-4d75-bfb4-6f2be6ee77bd\") " pod="openstack/kube-state-metrics-0" Mar 14 09:17:12 crc kubenswrapper[4869]: I0314 09:17:12.928523 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.955136 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.957251 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.963759 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.964932 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.964978 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.964998 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.965141 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.965287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-kdbf4" Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.965495 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.980947 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:17:13 crc kubenswrapper[4869]: I0314 09:17:13.984962 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.091930 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.092345 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.092581 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.093476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.093523 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.093574 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8nzc\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-kube-api-access-b8nzc\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.093960 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.094094 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.094121 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.094245 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " 
pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196158 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196244 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196268 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") pod \"prometheus-metric-storage-0\" (UID: 
\"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196323 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8nzc\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-kube-api-access-b8nzc\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196376 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.196431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.197139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.197280 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.197820 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.204035 4869 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.204068 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b23c7cf67fc615d04f0e059180fc33c3eccc3627e9974587af79149c424358e3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.205308 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.205565 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.207695 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.207866 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.214727 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.215174 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8nzc\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-kube-api-access-b8nzc\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.255436 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") pod \"prometheus-metric-storage-0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.274163 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:17:14 crc kubenswrapper[4869]: I0314 09:17:14.384797 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.862483 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vznj2"] Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.863840 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vznj2" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.869421 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.869590 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hj8vg" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.869696 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.872355 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vznj2"] Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.903355 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rllnb"] Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.905038 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932037 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-688fd\" (UniqueName: \"kubernetes.io/projected/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-kube-api-access-688fd\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932085 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-var-lib\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932113 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8735cd0-7d17-4b28-b5fb-99219798ee6f-var-run\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932155 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggg6j\" (UniqueName: \"kubernetes.io/projected/e8735cd0-7d17-4b28-b5fb-99219798ee6f-kube-api-access-ggg6j\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-var-log\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " 
pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8735cd0-7d17-4b28-b5fb-99219798ee6f-scripts\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8735cd0-7d17-4b28-b5fb-99219798ee6f-ovn-controller-tls-certs\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932252 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8735cd0-7d17-4b28-b5fb-99219798ee6f-combined-ca-bundle\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932298 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-etc-ovs\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932323 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-var-run\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" 
Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932337 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-scripts\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8735cd0-7d17-4b28-b5fb-99219798ee6f-var-run-ovn\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932367 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8735cd0-7d17-4b28-b5fb-99219798ee6f-var-log-ovn\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:15 crc kubenswrapper[4869]: I0314 09:17:15.932819 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rllnb"] Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.034883 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8735cd0-7d17-4b28-b5fb-99219798ee6f-ovn-controller-tls-certs\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.034977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8735cd0-7d17-4b28-b5fb-99219798ee6f-combined-ca-bundle\") pod \"ovn-controller-vznj2\" 
(UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-etc-ovs\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035240 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-var-run\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035394 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-scripts\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8735cd0-7d17-4b28-b5fb-99219798ee6f-var-run-ovn\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035483 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8735cd0-7d17-4b28-b5fb-99219798ee6f-var-log-ovn\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035566 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-688fd\" (UniqueName: \"kubernetes.io/projected/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-kube-api-access-688fd\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035635 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-var-lib\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035702 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8735cd0-7d17-4b28-b5fb-99219798ee6f-var-run\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035795 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggg6j\" (UniqueName: \"kubernetes.io/projected/e8735cd0-7d17-4b28-b5fb-99219798ee6f-kube-api-access-ggg6j\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035868 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-var-log\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.035898 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e8735cd0-7d17-4b28-b5fb-99219798ee6f-scripts\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.038712 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-var-run\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.038873 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-var-lib\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.038947 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8735cd0-7d17-4b28-b5fb-99219798ee6f-var-run-ovn\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.039049 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8735cd0-7d17-4b28-b5fb-99219798ee6f-var-log-ovn\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.039454 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8735cd0-7d17-4b28-b5fb-99219798ee6f-var-run\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc 
kubenswrapper[4869]: I0314 09:17:16.039571 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-var-log\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.041416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8735cd0-7d17-4b28-b5fb-99219798ee6f-scripts\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.050415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-scripts\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.053652 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-etc-ovs\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.060331 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8735cd0-7d17-4b28-b5fb-99219798ee6f-ovn-controller-tls-certs\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.061003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e8735cd0-7d17-4b28-b5fb-99219798ee6f-combined-ca-bundle\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.061435 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggg6j\" (UniqueName: \"kubernetes.io/projected/e8735cd0-7d17-4b28-b5fb-99219798ee6f-kube-api-access-ggg6j\") pod \"ovn-controller-vznj2\" (UID: \"e8735cd0-7d17-4b28-b5fb-99219798ee6f\") " pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.064033 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-688fd\" (UniqueName: \"kubernetes.io/projected/8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2-kube-api-access-688fd\") pod \"ovn-controller-ovs-rllnb\" (UID: \"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2\") " pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.182466 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vznj2" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.220607 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.294058 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.298999 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.301479 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.301520 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.301581 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.301960 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-52r5h" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.302014 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.302207 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.343914 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.343985 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e841cbaa-b100-4321-9b08-f5725aee3408-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.344016 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/e841cbaa-b100-4321-9b08-f5725aee3408-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.344035 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e841cbaa-b100-4321-9b08-f5725aee3408-config\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.344061 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf8m2\" (UniqueName: \"kubernetes.io/projected/e841cbaa-b100-4321-9b08-f5725aee3408-kube-api-access-pf8m2\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.344086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e841cbaa-b100-4321-9b08-f5725aee3408-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.344112 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e841cbaa-b100-4321-9b08-f5725aee3408-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.344137 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e841cbaa-b100-4321-9b08-f5725aee3408-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.446399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e841cbaa-b100-4321-9b08-f5725aee3408-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.446446 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e841cbaa-b100-4321-9b08-f5725aee3408-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.446470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e841cbaa-b100-4321-9b08-f5725aee3408-config\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.446499 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf8m2\" (UniqueName: \"kubernetes.io/projected/e841cbaa-b100-4321-9b08-f5725aee3408-kube-api-access-pf8m2\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.446547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e841cbaa-b100-4321-9b08-f5725aee3408-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " 
pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.446579 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e841cbaa-b100-4321-9b08-f5725aee3408-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.446605 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e841cbaa-b100-4321-9b08-f5725aee3408-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.446645 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.446926 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.448178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e841cbaa-b100-4321-9b08-f5725aee3408-config\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.448336 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/e841cbaa-b100-4321-9b08-f5725aee3408-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.448528 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e841cbaa-b100-4321-9b08-f5725aee3408-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.456711 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e841cbaa-b100-4321-9b08-f5725aee3408-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.456725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e841cbaa-b100-4321-9b08-f5725aee3408-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.467325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e841cbaa-b100-4321-9b08-f5725aee3408-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.471194 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf8m2\" (UniqueName: \"kubernetes.io/projected/e841cbaa-b100-4321-9b08-f5725aee3408-kube-api-access-pf8m2\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " 
pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.480758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e841cbaa-b100-4321-9b08-f5725aee3408\") " pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:16 crc kubenswrapper[4869]: I0314 09:17:16.624906 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.022101 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.023606 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.025672 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.025757 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.025987 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.027021 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8594m" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.036851 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.213356 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-scripts\") pod \"ovsdbserver-sb-0\" 
(UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.213424 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.213685 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.213946 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.213990 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9cf4\" (UniqueName: \"kubernetes.io/projected/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-kube-api-access-b9cf4\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.214048 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " 
pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.214161 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-config\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.214222 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.315614 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.315668 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9cf4\" (UniqueName: \"kubernetes.io/projected/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-kube-api-access-b9cf4\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.315705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.315737 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-config\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.315768 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.315812 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.315845 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.316552 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.316947 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.317377 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.317650 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.317802 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.325403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.325409 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.326874 4869 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.343199 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.346641 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9cf4\" (UniqueName: \"kubernetes.io/projected/9b048a42-637e-49e6-bdfd-ba3d574e5e4b-kube-api-access-b9cf4\") pod \"ovsdbserver-sb-0\" (UID: \"9b048a42-637e-49e6-bdfd-ba3d574e5e4b\") " pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:20 crc kubenswrapper[4869]: I0314 09:17:20.641322 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:21 crc kubenswrapper[4869]: W0314 09:17:21.472453 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9735b30c_8379_4478_9460_51882d519d32.slice/crio-feb760ec85a55a9f48a1f5092ed17d51cee65cf11f14d74939c83513a74a38e2 WatchSource:0}: Error finding container feb760ec85a55a9f48a1f5092ed17d51cee65cf11f14d74939c83513a74a38e2: Status 404 returned error can't find the container with id feb760ec85a55a9f48a1f5092ed17d51cee65cf11f14d74939c83513a74a38e2 Mar 14 09:17:21 crc kubenswrapper[4869]: I0314 09:17:21.864786 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7c99478f-pn2sb"] Mar 14 09:17:22 crc kubenswrapper[4869]: E0314 09:17:22.191116 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 14 09:17:22 crc kubenswrapper[4869]: E0314 09:17:22.191156 4869 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 14 09:17:22 crc kubenswrapper[4869]: E0314 09:17:22.191287 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.153:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2sxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-65d4fd8749-f7ft9_openstack(6f7a929f-378d-41fd-8dfb-b332c9db0f7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:17:22 crc kubenswrapper[4869]: E0314 09:17:22.193377 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" podUID="6f7a929f-378d-41fd-8dfb-b332c9db0f7e" Mar 14 09:17:22 crc kubenswrapper[4869]: E0314 09:17:22.211194 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 14 09:17:22 crc kubenswrapper[4869]: E0314 09:17:22.211232 4869 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Mar 14 09:17:22 crc kubenswrapper[4869]: E0314 09:17:22.211320 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.153:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24wcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-864dcb4fb5-9qqrc_openstack(ba487a08-578b-4a02-a8c2-3aea942b953a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:17:22 crc kubenswrapper[4869]: E0314 09:17:22.212449 4869 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" podUID="ba487a08-578b-4a02-a8c2-3aea942b953a" Mar 14 09:17:22 crc kubenswrapper[4869]: I0314 09:17:22.442445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9735b30c-8379-4478-9460-51882d519d32","Type":"ContainerStarted","Data":"feb760ec85a55a9f48a1f5092ed17d51cee65cf11f14d74939c83513a74a38e2"} Mar 14 09:17:22 crc kubenswrapper[4869]: I0314 09:17:22.445187 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" event={"ID":"e0310552-638b-4d95-8946-87187f5815a6","Type":"ContainerStarted","Data":"19d4125657fed37183fe706cbe4ddbb5fb1996014c7e0a35a125b6e42f364c7a"} Mar 14 09:17:22 crc kubenswrapper[4869]: I0314 09:17:22.696449 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 14 09:17:22 crc kubenswrapper[4869]: I0314 09:17:22.716538 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.003226 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.019225 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.167432 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-config\") pod \"ba487a08-578b-4a02-a8c2-3aea942b953a\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.167695 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24wcp\" (UniqueName: \"kubernetes.io/projected/ba487a08-578b-4a02-a8c2-3aea942b953a-kube-api-access-24wcp\") pod \"ba487a08-578b-4a02-a8c2-3aea942b953a\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.167762 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-config\") pod \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\" (UID: \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\") " Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.167796 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2sxz\" (UniqueName: \"kubernetes.io/projected/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-kube-api-access-g2sxz\") pod \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\" (UID: \"6f7a929f-378d-41fd-8dfb-b332c9db0f7e\") " Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.167829 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-dns-svc\") pod \"ba487a08-578b-4a02-a8c2-3aea942b953a\" (UID: \"ba487a08-578b-4a02-a8c2-3aea942b953a\") " Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.168232 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-config" (OuterVolumeSpecName: "config") pod "6f7a929f-378d-41fd-8dfb-b332c9db0f7e" (UID: "6f7a929f-378d-41fd-8dfb-b332c9db0f7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.168713 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba487a08-578b-4a02-a8c2-3aea942b953a" (UID: "ba487a08-578b-4a02-a8c2-3aea942b953a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.169051 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.169065 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.169064 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-config" (OuterVolumeSpecName: "config") pod "ba487a08-578b-4a02-a8c2-3aea942b953a" (UID: "ba487a08-578b-4a02-a8c2-3aea942b953a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.176486 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-kube-api-access-g2sxz" (OuterVolumeSpecName: "kube-api-access-g2sxz") pod "6f7a929f-378d-41fd-8dfb-b332c9db0f7e" (UID: "6f7a929f-378d-41fd-8dfb-b332c9db0f7e"). 
InnerVolumeSpecName "kube-api-access-g2sxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.177712 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba487a08-578b-4a02-a8c2-3aea942b953a-kube-api-access-24wcp" (OuterVolumeSpecName: "kube-api-access-24wcp") pod "ba487a08-578b-4a02-a8c2-3aea942b953a" (UID: "ba487a08-578b-4a02-a8c2-3aea942b953a"). InnerVolumeSpecName "kube-api-access-24wcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.203902 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.210593 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.216768 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.222683 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.228411 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.233752 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c9969dd5-5h7jl"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.270771 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24wcp\" (UniqueName: \"kubernetes.io/projected/ba487a08-578b-4a02-a8c2-3aea942b953a-kube-api-access-24wcp\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.270812 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2sxz\" (UniqueName: 
\"kubernetes.io/projected/6f7a929f-378d-41fd-8dfb-b332c9db0f7e-kube-api-access-g2sxz\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.270826 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba487a08-578b-4a02-a8c2-3aea942b953a-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.466913 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" event={"ID":"ba487a08-578b-4a02-a8c2-3aea942b953a","Type":"ContainerDied","Data":"e9b127cb1a21a781707177e38832dd03d8ebb2e0ab32e6e46f9b2e357c800e57"} Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.466930 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864dcb4fb5-9qqrc" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.468482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4f89c32b-b055-4d5e-aa56-a5f41553707c","Type":"ContainerStarted","Data":"6ead8380bec7f7fd713c98240d4da413a76b27d02490a8e3e930e0e9360a83c9"} Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.470848 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" event={"ID":"6f7a929f-378d-41fd-8dfb-b332c9db0f7e","Type":"ContainerDied","Data":"52528111bc71a960ee87a2b42c87c7642e9cdfa5a0064dee959fe60968628075"} Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.470931 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65d4fd8749-f7ft9" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.488109 4869 generic.go:334] "Generic (PLEG): container finished" podID="e0310552-638b-4d95-8946-87187f5815a6" containerID="b6c3f5f3875dad5f728510fd9fdb9f03cf98fc3b69d6c2d9731b8f2e6764de90" exitCode=0 Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.488222 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" event={"ID":"e0310552-638b-4d95-8946-87187f5815a6","Type":"ContainerDied","Data":"b6c3f5f3875dad5f728510fd9fdb9f03cf98fc3b69d6c2d9731b8f2e6764de90"} Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.490313 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b16088c-48ba-4c09-91b1-a0447bced81b","Type":"ContainerStarted","Data":"0cc57c9b23cc9859b6090358c04776f9e0d0e74e99a0cf807d1f29478774473d"} Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.498252 4869 generic.go:334] "Generic (PLEG): container finished" podID="b99674e2-9283-4ba9-bf6a-cdbfd8763ada" containerID="64e7b4f2cc7a8133dba321c146b1ad80929fe731bd5f7019d71eac721e314190" exitCode=0 Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.498296 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675cdbc945-kvp74" event={"ID":"b99674e2-9283-4ba9-bf6a-cdbfd8763ada","Type":"ContainerDied","Data":"64e7b4f2cc7a8133dba321c146b1ad80929fe731bd5f7019d71eac721e314190"} Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.545595 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vznj2"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.604162 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864dcb4fb5-9qqrc"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.659219 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-864dcb4fb5-9qqrc"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.694686 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65d4fd8749-f7ft9"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.700770 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65d4fd8749-f7ft9"] Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.721855 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f7a929f-378d-41fd-8dfb-b332c9db0f7e" path="/var/lib/kubelet/pods/6f7a929f-378d-41fd-8dfb-b332c9db0f7e/volumes" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.723095 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba487a08-578b-4a02-a8c2-3aea942b953a" path="/var/lib/kubelet/pods/ba487a08-578b-4a02-a8c2-3aea942b953a/volumes" Mar 14 09:17:23 crc kubenswrapper[4869]: E0314 09:17:23.723813 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba487a08_578b_4a02_a8c2_3aea942b953a.slice\": RecentStats: unable to find data in memory cache]" Mar 14 09:17:23 crc kubenswrapper[4869]: I0314 09:17:23.824545 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 14 09:17:24 crc kubenswrapper[4869]: W0314 09:17:24.060120 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38c3b4a0_0639_4d3b_ae4f_3e272522326f.slice/crio-0572a42792fbeaea1158c7dbb6430d37a65695557ec4602644525f9acb399e3b WatchSource:0}: Error finding container 0572a42792fbeaea1158c7dbb6430d37a65695557ec4602644525f9acb399e3b: Status 404 returned error can't find the container with id 0572a42792fbeaea1158c7dbb6430d37a65695557ec4602644525f9acb399e3b Mar 14 09:17:24 crc kubenswrapper[4869]: I0314 09:17:24.473582 4869 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rllnb"] Mar 14 09:17:24 crc kubenswrapper[4869]: I0314 09:17:24.508303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"38c3b4a0-0639-4d3b-ae4f-3e272522326f","Type":"ContainerStarted","Data":"0572a42792fbeaea1158c7dbb6430d37a65695557ec4602644525f9acb399e3b"} Mar 14 09:17:24 crc kubenswrapper[4869]: I0314 09:17:24.510139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c3c8469-db90-4d75-bfb4-6f2be6ee77bd","Type":"ContainerStarted","Data":"a6c6ae3359ee53a5e6c33e2c2872c4d0907cbec8a893783a1236f3f035d1b40a"} Mar 14 09:17:24 crc kubenswrapper[4869]: I0314 09:17:24.753662 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 14 09:17:25 crc kubenswrapper[4869]: W0314 09:17:25.108339 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda13efd4_046a_4059_9b04_b731f2d164b5.slice/crio-a9689aedc30f822d25dfbefc457461b5a00ffaa11042b953a841b7eb69026d9b WatchSource:0}: Error finding container a9689aedc30f822d25dfbefc457461b5a00ffaa11042b953a841b7eb69026d9b: Status 404 returned error can't find the container with id a9689aedc30f822d25dfbefc457461b5a00ffaa11042b953a841b7eb69026d9b Mar 14 09:17:25 crc kubenswrapper[4869]: W0314 09:17:25.110064 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode841cbaa_b100_4321_9b08_f5725aee3408.slice/crio-4a96c2dbcb351c749708d58b1fbd9e5edb229b89087c95fd1d95f81aef51087a WatchSource:0}: Error finding container 4a96c2dbcb351c749708d58b1fbd9e5edb229b89087c95fd1d95f81aef51087a: Status 404 returned error can't find the container with id 4a96c2dbcb351c749708d58b1fbd9e5edb229b89087c95fd1d95f81aef51087a Mar 14 09:17:25 crc kubenswrapper[4869]: W0314 09:17:25.119587 4869 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd42f4faa_b0db_40b7_acd5_c89f1eaf19ff.slice/crio-05412e1f9c6cce67f76f5ffa64482b9e8fc7bf30257d36d554ed6b3a5b7bf282 WatchSource:0}: Error finding container 05412e1f9c6cce67f76f5ffa64482b9e8fc7bf30257d36d554ed6b3a5b7bf282: Status 404 returned error can't find the container with id 05412e1f9c6cce67f76f5ffa64482b9e8fc7bf30257d36d554ed6b3a5b7bf282 Mar 14 09:17:25 crc kubenswrapper[4869]: W0314 09:17:25.123148 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3444ec97_2efb_4a1a_b288_5b518eda928d.slice/crio-911234c334a3b006f124449233db930c5140676cff0fdc71ab6b7c2969dc461f WatchSource:0}: Error finding container 911234c334a3b006f124449233db930c5140676cff0fdc71ab6b7c2969dc461f: Status 404 returned error can't find the container with id 911234c334a3b006f124449233db930c5140676cff0fdc71ab6b7c2969dc461f Mar 14 09:17:25 crc kubenswrapper[4869]: W0314 09:17:25.125261 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d3fa9ff_c502_4d77_8465_e60dea09a3a0.slice/crio-f40ee6b8b945d5e36ceae0969a8024db6c50921dde37b8e229aec70b6190b198 WatchSource:0}: Error finding container f40ee6b8b945d5e36ceae0969a8024db6c50921dde37b8e229aec70b6190b198: Status 404 returned error can't find the container with id f40ee6b8b945d5e36ceae0969a8024db6c50921dde37b8e229aec70b6190b198 Mar 14 09:17:25 crc kubenswrapper[4869]: W0314 09:17:25.131427 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8735cd0_7d17_4b28_b5fb_99219798ee6f.slice/crio-5c8b75d16e75f9cf9ecaa7c4215a1f52869f13385c0745a14300963b196c3380 WatchSource:0}: Error finding container 5c8b75d16e75f9cf9ecaa7c4215a1f52869f13385c0745a14300963b196c3380: Status 404 returned error 
can't find the container with id 5c8b75d16e75f9cf9ecaa7c4215a1f52869f13385c0745a14300963b196c3380 Mar 14 09:17:25 crc kubenswrapper[4869]: W0314 09:17:25.159473 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b048a42_637e_49e6_bdfd_ba3d574e5e4b.slice/crio-9f5900020edcddad3379996ed7fca6a27bfda92fc42624f896a48d817ae4fbb5 WatchSource:0}: Error finding container 9f5900020edcddad3379996ed7fca6a27bfda92fc42624f896a48d817ae4fbb5: Status 404 returned error can't find the container with id 9f5900020edcddad3379996ed7fca6a27bfda92fc42624f896a48d817ae4fbb5 Mar 14 09:17:25 crc kubenswrapper[4869]: W0314 09:17:25.162772 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f2e84cb_3fc4_4d32_87dc_a9e81a51cea2.slice/crio-bf7ecaf66e5b4431c6c632a9a4dce13c5816932d578579ea7fa00cf596de5c93 WatchSource:0}: Error finding container bf7ecaf66e5b4431c6c632a9a4dce13c5816932d578579ea7fa00cf596de5c93: Status 404 returned error can't find the container with id bf7ecaf66e5b4431c6c632a9a4dce13c5816932d578579ea7fa00cf596de5c93 Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.214255 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.319863 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs9rv\" (UniqueName: \"kubernetes.io/projected/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-kube-api-access-qs9rv\") pod \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.320146 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-config\") pod \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.320228 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-dns-svc\") pod \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\" (UID: \"b99674e2-9283-4ba9-bf6a-cdbfd8763ada\") " Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.327675 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-kube-api-access-qs9rv" (OuterVolumeSpecName: "kube-api-access-qs9rv") pod "b99674e2-9283-4ba9-bf6a-cdbfd8763ada" (UID: "b99674e2-9283-4ba9-bf6a-cdbfd8763ada"). InnerVolumeSpecName "kube-api-access-qs9rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.341742 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b99674e2-9283-4ba9-bf6a-cdbfd8763ada" (UID: "b99674e2-9283-4ba9-bf6a-cdbfd8763ada"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.360746 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-config" (OuterVolumeSpecName: "config") pod "b99674e2-9283-4ba9-bf6a-cdbfd8763ada" (UID: "b99674e2-9283-4ba9-bf6a-cdbfd8763ada"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.422616 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs9rv\" (UniqueName: \"kubernetes.io/projected/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-kube-api-access-qs9rv\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.422662 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.422675 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b99674e2-9283-4ba9-bf6a-cdbfd8763ada-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.522782 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9b048a42-637e-49e6-bdfd-ba3d574e5e4b","Type":"ContainerStarted","Data":"9f5900020edcddad3379996ed7fca6a27bfda92fc42624f896a48d817ae4fbb5"} Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.524626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" event={"ID":"3444ec97-2efb-4a1a-b288-5b518eda928d","Type":"ContainerStarted","Data":"911234c334a3b006f124449233db930c5140676cff0fdc71ab6b7c2969dc461f"} Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.525880 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-rllnb" event={"ID":"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2","Type":"ContainerStarted","Data":"bf7ecaf66e5b4431c6c632a9a4dce13c5816932d578579ea7fa00cf596de5c93"} Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.527017 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vznj2" event={"ID":"e8735cd0-7d17-4b28-b5fb-99219798ee6f","Type":"ContainerStarted","Data":"5c8b75d16e75f9cf9ecaa7c4215a1f52869f13385c0745a14300963b196c3380"} Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.528203 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e841cbaa-b100-4321-9b08-f5725aee3408","Type":"ContainerStarted","Data":"4a96c2dbcb351c749708d58b1fbd9e5edb229b89087c95fd1d95f81aef51087a"} Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.529502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"da13efd4-046a-4059-9b04-b731f2d164b5","Type":"ContainerStarted","Data":"a9689aedc30f822d25dfbefc457461b5a00ffaa11042b953a841b7eb69026d9b"} Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.530411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff","Type":"ContainerStarted","Data":"05412e1f9c6cce67f76f5ffa64482b9e8fc7bf30257d36d554ed6b3a5b7bf282"} Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.532848 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675cdbc945-kvp74" Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.532841 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675cdbc945-kvp74" event={"ID":"b99674e2-9283-4ba9-bf6a-cdbfd8763ada","Type":"ContainerDied","Data":"8aec2c7c409b1b20cdf737e5b40e7219151ca99cca8318f1d015e0260ed13b59"} Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.532948 4869 scope.go:117] "RemoveContainer" containerID="64e7b4f2cc7a8133dba321c146b1ad80929fe731bd5f7019d71eac721e314190" Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.535132 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerStarted","Data":"f40ee6b8b945d5e36ceae0969a8024db6c50921dde37b8e229aec70b6190b198"} Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.580105 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675cdbc945-kvp74"] Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.585765 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675cdbc945-kvp74"] Mar 14 09:17:25 crc kubenswrapper[4869]: I0314 09:17:25.715828 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99674e2-9283-4ba9-bf6a-cdbfd8763ada" path="/var/lib/kubelet/pods/b99674e2-9283-4ba9-bf6a-cdbfd8763ada/volumes" Mar 14 09:17:27 crc kubenswrapper[4869]: I0314 09:17:27.550623 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" event={"ID":"e0310552-638b-4d95-8946-87187f5815a6","Type":"ContainerStarted","Data":"19d6ccd6ac78bd85bc8bf2790d7b7c44584325ca5fd72fa7bfe97a1c13049e21"} Mar 14 09:17:27 crc kubenswrapper[4869]: I0314 09:17:27.550953 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:27 crc kubenswrapper[4869]: I0314 
09:17:27.555599 4869 generic.go:334] "Generic (PLEG): container finished" podID="3444ec97-2efb-4a1a-b288-5b518eda928d" containerID="ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad" exitCode=0 Mar 14 09:17:27 crc kubenswrapper[4869]: I0314 09:17:27.555642 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" event={"ID":"3444ec97-2efb-4a1a-b288-5b518eda928d","Type":"ContainerDied","Data":"ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad"} Mar 14 09:17:27 crc kubenswrapper[4869]: I0314 09:17:27.580365 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" podStartSLOduration=22.334474431 podStartE2EDuration="22.580348625s" podCreationTimestamp="2026-03-14 09:17:05 +0000 UTC" firstStartedPulling="2026-03-14 09:17:22.205145272 +0000 UTC m=+1195.177427325" lastFinishedPulling="2026-03-14 09:17:22.451019456 +0000 UTC m=+1195.423301519" observedRunningTime="2026-03-14 09:17:27.577059035 +0000 UTC m=+1200.549341088" watchObservedRunningTime="2026-03-14 09:17:27.580348625 +0000 UTC m=+1200.552630678" Mar 14 09:17:28 crc kubenswrapper[4869]: I0314 09:17:28.564769 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"da13efd4-046a-4059-9b04-b731f2d164b5","Type":"ContainerStarted","Data":"5470deb1007aa96d051d8f81bf041614709ee4d983175e2f629b99fa456c865e"} Mar 14 09:17:28 crc kubenswrapper[4869]: I0314 09:17:28.566298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b16088c-48ba-4c09-91b1-a0447bced81b","Type":"ContainerStarted","Data":"1042ccc21178cc9e4bbf0d7a923ef4db1df59e4deae479512e0586fc24023f68"} Mar 14 09:17:28 crc kubenswrapper[4869]: I0314 09:17:28.567605 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"4f89c32b-b055-4d5e-aa56-a5f41553707c","Type":"ContainerStarted","Data":"84ba2357a50d423b8e7ebabb5d5c7352b5cbbf3974931fb316350e2b35f9f561"} Mar 14 09:17:28 crc kubenswrapper[4869]: I0314 09:17:28.567761 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 14 09:17:28 crc kubenswrapper[4869]: I0314 09:17:28.569406 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9735b30c-8379-4478-9460-51882d519d32","Type":"ContainerStarted","Data":"0c84ee3fb1a1cef8e52391c99da3410859c3758e3be7c7e1bbfe2febdceee9c1"} Mar 14 09:17:28 crc kubenswrapper[4869]: I0314 09:17:28.675015 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.607411299 podStartE2EDuration="18.674993416s" podCreationTimestamp="2026-03-14 09:17:10 +0000 UTC" firstStartedPulling="2026-03-14 09:17:22.715598111 +0000 UTC m=+1195.687880154" lastFinishedPulling="2026-03-14 09:17:26.783180208 +0000 UTC m=+1199.755462271" observedRunningTime="2026-03-14 09:17:28.668550378 +0000 UTC m=+1201.640832441" watchObservedRunningTime="2026-03-14 09:17:28.674993416 +0000 UTC m=+1201.647275479" Mar 14 09:17:29 crc kubenswrapper[4869]: I0314 09:17:29.577448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"38c3b4a0-0639-4d3b-ae4f-3e272522326f","Type":"ContainerStarted","Data":"acb1141604bd391fc34f193e9241b4fd4fc568fdd20ae980dfb311966fd3a661"} Mar 14 09:17:29 crc kubenswrapper[4869]: I0314 09:17:29.579591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff","Type":"ContainerStarted","Data":"60153036609ef8e462623fe4b8acdf6430073a2485d7aa65e1aae4e10697e55f"} Mar 14 09:17:35 crc kubenswrapper[4869]: I0314 09:17:35.520093 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/memcached-0" Mar 14 09:17:35 crc kubenswrapper[4869]: I0314 09:17:35.826648 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.648323 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rllnb" event={"ID":"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2","Type":"ContainerStarted","Data":"24a690454cbd594542ee55f04dd5eaf6cbfc31c411367601fb7fb31510831a2e"} Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.651656 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vznj2" event={"ID":"e8735cd0-7d17-4b28-b5fb-99219798ee6f","Type":"ContainerStarted","Data":"a62656b48fda8f7933d88e041133b335679792951f84561101fbbc196a212c0f"} Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.651972 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vznj2" Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.653260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9b048a42-637e-49e6-bdfd-ba3d574e5e4b","Type":"ContainerStarted","Data":"370cba1c37864cd2364144943ca8781e5bbdb8ea58cdb4766bd286decd967f3b"} Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.656844 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c3c8469-db90-4d75-bfb4-6f2be6ee77bd","Type":"ContainerStarted","Data":"56e97c1ac5294487a499e77ce4369a79ab53794b545c4ba6b799ca5155dcaf3f"} Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.656936 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.658394 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"e841cbaa-b100-4321-9b08-f5725aee3408","Type":"ContainerStarted","Data":"210946503cfcb864821298a6a8d53cafbf02086cf3eb7b0d2d839543fad7e409"} Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.661722 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" event={"ID":"3444ec97-2efb-4a1a-b288-5b518eda928d","Type":"ContainerStarted","Data":"498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784"} Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.662157 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.692021 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vznj2" podStartSLOduration=12.327228284 podStartE2EDuration="22.692006361s" podCreationTimestamp="2026-03-14 09:17:15 +0000 UTC" firstStartedPulling="2026-03-14 09:17:25.154707654 +0000 UTC m=+1198.126989707" lastFinishedPulling="2026-03-14 09:17:35.519485731 +0000 UTC m=+1208.491767784" observedRunningTime="2026-03-14 09:17:37.68833185 +0000 UTC m=+1210.660613903" watchObservedRunningTime="2026-03-14 09:17:37.692006361 +0000 UTC m=+1210.664288414" Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.714738 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.902177195 podStartE2EDuration="25.714715029s" podCreationTimestamp="2026-03-14 09:17:12 +0000 UTC" firstStartedPulling="2026-03-14 09:17:24.05914203 +0000 UTC m=+1197.031424103" lastFinishedPulling="2026-03-14 09:17:36.871679884 +0000 UTC m=+1209.843961937" observedRunningTime="2026-03-14 09:17:37.702207091 +0000 UTC m=+1210.674489144" watchObservedRunningTime="2026-03-14 09:17:37.714715029 +0000 UTC m=+1210.686997092" Mar 14 09:17:37 crc kubenswrapper[4869]: I0314 09:17:37.735068 4869 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" podStartSLOduration=32.735044749 podStartE2EDuration="32.735044749s" podCreationTimestamp="2026-03-14 09:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:17:37.732343163 +0000 UTC m=+1210.704625216" watchObservedRunningTime="2026-03-14 09:17:37.735044749 +0000 UTC m=+1210.707326812" Mar 14 09:17:38 crc kubenswrapper[4869]: I0314 09:17:38.675670 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2" containerID="24a690454cbd594542ee55f04dd5eaf6cbfc31c411367601fb7fb31510831a2e" exitCode=0 Mar 14 09:17:38 crc kubenswrapper[4869]: I0314 09:17:38.675778 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rllnb" event={"ID":"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2","Type":"ContainerDied","Data":"24a690454cbd594542ee55f04dd5eaf6cbfc31c411367601fb7fb31510831a2e"} Mar 14 09:17:39 crc kubenswrapper[4869]: I0314 09:17:39.605156 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:17:39 crc kubenswrapper[4869]: I0314 09:17:39.605578 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:17:39 crc kubenswrapper[4869]: I0314 09:17:39.605630 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:17:39 crc 
kubenswrapper[4869]: I0314 09:17:39.606286 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56010979bbae19d804da289e0aa16d793e02c78a300551a90489925126f6f41f"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:17:39 crc kubenswrapper[4869]: I0314 09:17:39.606372 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://56010979bbae19d804da289e0aa16d793e02c78a300551a90489925126f6f41f" gracePeriod=600 Mar 14 09:17:39 crc kubenswrapper[4869]: I0314 09:17:39.685233 4869 generic.go:334] "Generic (PLEG): container finished" podID="2b16088c-48ba-4c09-91b1-a0447bced81b" containerID="1042ccc21178cc9e4bbf0d7a923ef4db1df59e4deae479512e0586fc24023f68" exitCode=0 Mar 14 09:17:39 crc kubenswrapper[4869]: I0314 09:17:39.685307 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b16088c-48ba-4c09-91b1-a0447bced81b","Type":"ContainerDied","Data":"1042ccc21178cc9e4bbf0d7a923ef4db1df59e4deae479512e0586fc24023f68"} Mar 14 09:17:39 crc kubenswrapper[4869]: I0314 09:17:39.687867 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rllnb" event={"ID":"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2","Type":"ContainerStarted","Data":"6fa00cda9a6b2095c5c4d60d5267c5b47f4f275f56ce6ffc7cbcf0ed6a1d05e7"} Mar 14 09:17:39 crc kubenswrapper[4869]: I0314 09:17:39.689763 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerStarted","Data":"b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065"} Mar 
14 09:17:40 crc kubenswrapper[4869]: I0314 09:17:40.700998 4869 generic.go:334] "Generic (PLEG): container finished" podID="d42f4faa-b0db-40b7-acd5-c89f1eaf19ff" containerID="60153036609ef8e462623fe4b8acdf6430073a2485d7aa65e1aae4e10697e55f" exitCode=0 Mar 14 09:17:40 crc kubenswrapper[4869]: I0314 09:17:40.701091 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff","Type":"ContainerDied","Data":"60153036609ef8e462623fe4b8acdf6430073a2485d7aa65e1aae4e10697e55f"} Mar 14 09:17:40 crc kubenswrapper[4869]: I0314 09:17:40.708985 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="56010979bbae19d804da289e0aa16d793e02c78a300551a90489925126f6f41f" exitCode=0 Mar 14 09:17:40 crc kubenswrapper[4869]: I0314 09:17:40.710204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"56010979bbae19d804da289e0aa16d793e02c78a300551a90489925126f6f41f"} Mar 14 09:17:40 crc kubenswrapper[4869]: I0314 09:17:40.710256 4869 scope.go:117] "RemoveContainer" containerID="6659e883f1bb6e9d0a6c6412fd0c4a00d22fe987bb78cc13d8b2e976a19f9ff0" Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.717909 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e841cbaa-b100-4321-9b08-f5725aee3408","Type":"ContainerStarted","Data":"a6a21c6d8b5ad14001b3014853d09d42785856a8ddfab67c3dd8ae73db0164f5"} Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.721325 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2b16088c-48ba-4c09-91b1-a0447bced81b","Type":"ContainerStarted","Data":"7f46384627672ca88829e0b1526228eb5969629f38581f8bfd0a45b4f09a8b34"} Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 
09:17:41.723826 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d42f4faa-b0db-40b7-acd5-c89f1eaf19ff","Type":"ContainerStarted","Data":"91413819d298dd4866eb0a756b5cf1b2b0b8f800474bca83c0fee381e9c9686d"} Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.727194 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"abbbfeab2461a01be6db6822d3b45b765d683a9778e55e7dd9c19e2a95f80e1d"} Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.731028 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rllnb" event={"ID":"8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2","Type":"ContainerStarted","Data":"986e7e936c0a61e142cf33ef848e91c31df9154932a8d0ede5cb531fc394fe0b"} Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.731192 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.733124 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9b048a42-637e-49e6-bdfd-ba3d574e5e4b","Type":"ContainerStarted","Data":"bec76c4cc0f3c90968b328276003916f2f6fec56d0a27917bcdb34f9fb7f5007"} Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.742054 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.505627291 podStartE2EDuration="26.742029437s" podCreationTimestamp="2026-03-14 09:17:15 +0000 UTC" firstStartedPulling="2026-03-14 09:17:25.114207808 +0000 UTC m=+1198.086489861" lastFinishedPulling="2026-03-14 09:17:41.350609954 +0000 UTC m=+1214.322892007" observedRunningTime="2026-03-14 09:17:41.741341159 +0000 UTC m=+1214.713623232" watchObservedRunningTime="2026-03-14 09:17:41.742029437 +0000 UTC m=+1214.714311490" 
Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.764307 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.554774379 podStartE2EDuration="22.764283714s" podCreationTimestamp="2026-03-14 09:17:19 +0000 UTC" firstStartedPulling="2026-03-14 09:17:25.180220051 +0000 UTC m=+1198.152502094" lastFinishedPulling="2026-03-14 09:17:41.389729376 +0000 UTC m=+1214.362011429" observedRunningTime="2026-03-14 09:17:41.761134556 +0000 UTC m=+1214.733416619" watchObservedRunningTime="2026-03-14 09:17:41.764283714 +0000 UTC m=+1214.736565767" Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.789471 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rllnb" podStartSLOduration=16.54246255 podStartE2EDuration="26.789453193s" podCreationTimestamp="2026-03-14 09:17:15 +0000 UTC" firstStartedPulling="2026-03-14 09:17:25.18018287 +0000 UTC m=+1198.152464913" lastFinishedPulling="2026-03-14 09:17:35.427173503 +0000 UTC m=+1208.399455556" observedRunningTime="2026-03-14 09:17:41.785915345 +0000 UTC m=+1214.758197388" watchObservedRunningTime="2026-03-14 09:17:41.789453193 +0000 UTC m=+1214.761735246" Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.815451 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=33.133231556 podStartE2EDuration="34.815430821s" podCreationTimestamp="2026-03-14 09:17:07 +0000 UTC" firstStartedPulling="2026-03-14 09:17:25.122673486 +0000 UTC m=+1198.094955539" lastFinishedPulling="2026-03-14 09:17:26.804872751 +0000 UTC m=+1199.777154804" observedRunningTime="2026-03-14 09:17:41.809777292 +0000 UTC m=+1214.782059355" watchObservedRunningTime="2026-03-14 09:17:41.815430821 +0000 UTC m=+1214.787712874" Mar 14 09:17:41 crc kubenswrapper[4869]: I0314 09:17:41.840022 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-cell1-galera-0" podStartSLOduration=29.762131885 podStartE2EDuration="33.839985965s" podCreationTimestamp="2026-03-14 09:17:08 +0000 UTC" firstStartedPulling="2026-03-14 09:17:22.724155991 +0000 UTC m=+1195.696438044" lastFinishedPulling="2026-03-14 09:17:26.802010071 +0000 UTC m=+1199.774292124" observedRunningTime="2026-03-14 09:17:41.838709184 +0000 UTC m=+1214.810991267" watchObservedRunningTime="2026-03-14 09:17:41.839985965 +0000 UTC m=+1214.812268018" Mar 14 09:17:42 crc kubenswrapper[4869]: I0314 09:17:42.742934 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rllnb" Mar 14 09:17:42 crc kubenswrapper[4869]: I0314 09:17:42.958664 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.019005 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c9969dd5-5h7jl"] Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.019574 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" podUID="3444ec97-2efb-4a1a-b288-5b518eda928d" containerName="dnsmasq-dns" containerID="cri-o://498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784" gracePeriod=10 Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.020752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.068178 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64fcdd45d5-qrlv7"] Mar 14 09:17:43 crc kubenswrapper[4869]: E0314 09:17:43.068577 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99674e2-9283-4ba9-bf6a-cdbfd8763ada" containerName="init" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.068594 4869 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b99674e2-9283-4ba9-bf6a-cdbfd8763ada" containerName="init" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.068747 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99674e2-9283-4ba9-bf6a-cdbfd8763ada" containerName="init" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.069603 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.103525 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64fcdd45d5-qrlv7"] Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.142640 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-dns-svc\") pod \"dnsmasq-dns-64fcdd45d5-qrlv7\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.142719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-config\") pod \"dnsmasq-dns-64fcdd45d5-qrlv7\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.142783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbzft\" (UniqueName: \"kubernetes.io/projected/a659e267-4af1-4594-94c2-7ce3e45a3515-kube-api-access-pbzft\") pod \"dnsmasq-dns-64fcdd45d5-qrlv7\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.244643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-dns-svc\") pod \"dnsmasq-dns-64fcdd45d5-qrlv7\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.244680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-config\") pod \"dnsmasq-dns-64fcdd45d5-qrlv7\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.244713 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbzft\" (UniqueName: \"kubernetes.io/projected/a659e267-4af1-4594-94c2-7ce3e45a3515-kube-api-access-pbzft\") pod \"dnsmasq-dns-64fcdd45d5-qrlv7\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.245630 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-config\") pod \"dnsmasq-dns-64fcdd45d5-qrlv7\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.245883 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-dns-svc\") pod \"dnsmasq-dns-64fcdd45d5-qrlv7\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.264770 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbzft\" (UniqueName: \"kubernetes.io/projected/a659e267-4af1-4594-94c2-7ce3e45a3515-kube-api-access-pbzft\") pod 
\"dnsmasq-dns-64fcdd45d5-qrlv7\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.407621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.595725 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.625932 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.652954 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm5hx\" (UniqueName: \"kubernetes.io/projected/3444ec97-2efb-4a1a-b288-5b518eda928d-kube-api-access-zm5hx\") pod \"3444ec97-2efb-4a1a-b288-5b518eda928d\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.653125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-dns-svc\") pod \"3444ec97-2efb-4a1a-b288-5b518eda928d\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.653230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-config\") pod \"3444ec97-2efb-4a1a-b288-5b518eda928d\" (UID: \"3444ec97-2efb-4a1a-b288-5b518eda928d\") " Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.662491 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3444ec97-2efb-4a1a-b288-5b518eda928d-kube-api-access-zm5hx" (OuterVolumeSpecName: "kube-api-access-zm5hx") pod 
"3444ec97-2efb-4a1a-b288-5b518eda928d" (UID: "3444ec97-2efb-4a1a-b288-5b518eda928d"). InnerVolumeSpecName "kube-api-access-zm5hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.696351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.709941 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-config" (OuterVolumeSpecName: "config") pod "3444ec97-2efb-4a1a-b288-5b518eda928d" (UID: "3444ec97-2efb-4a1a-b288-5b518eda928d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.728926 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3444ec97-2efb-4a1a-b288-5b518eda928d" (UID: "3444ec97-2efb-4a1a-b288-5b518eda928d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.755344 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.755390 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3444ec97-2efb-4a1a-b288-5b518eda928d-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.755402 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm5hx\" (UniqueName: \"kubernetes.io/projected/3444ec97-2efb-4a1a-b288-5b518eda928d-kube-api-access-zm5hx\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.759547 4869 generic.go:334] "Generic (PLEG): container finished" podID="3444ec97-2efb-4a1a-b288-5b518eda928d" containerID="498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784" exitCode=0 Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.759599 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.759635 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" event={"ID":"3444ec97-2efb-4a1a-b288-5b518eda928d","Type":"ContainerDied","Data":"498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784"} Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.759709 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c9969dd5-5h7jl" event={"ID":"3444ec97-2efb-4a1a-b288-5b518eda928d","Type":"ContainerDied","Data":"911234c334a3b006f124449233db930c5140676cff0fdc71ab6b7c2969dc461f"} Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.759732 4869 scope.go:117] "RemoveContainer" containerID="498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.760314 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.787020 4869 scope.go:117] "RemoveContainer" containerID="ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.799588 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c9969dd5-5h7jl"] Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.809997 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78c9969dd5-5h7jl"] Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.810861 4869 scope.go:117] "RemoveContainer" containerID="498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784" Mar 14 09:17:43 crc kubenswrapper[4869]: E0314 09:17:43.812584 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784\": container with 
ID starting with 498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784 not found: ID does not exist" containerID="498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.812620 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784"} err="failed to get container status \"498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784\": rpc error: code = NotFound desc = could not find container \"498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784\": container with ID starting with 498f4b1c0de3bb50d6252f115586069520e27f154e2ba39adaea4b26f0435784 not found: ID does not exist" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.812651 4869 scope.go:117] "RemoveContainer" containerID="ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad" Mar 14 09:17:43 crc kubenswrapper[4869]: E0314 09:17:43.813048 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad\": container with ID starting with ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad not found: ID does not exist" containerID="ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.813067 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad"} err="failed to get container status \"ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad\": rpc error: code = NotFound desc = could not find container \"ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad\": container with ID starting with ec29f9d4911b68fd74158ff0aa8439a31dd64dd04a16182ecfb278e12e7242ad not 
found: ID does not exist" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.827850 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 14 09:17:43 crc kubenswrapper[4869]: I0314 09:17:43.926444 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64fcdd45d5-qrlv7"] Mar 14 09:17:43 crc kubenswrapper[4869]: W0314 09:17:43.928085 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda659e267_4af1_4594_94c2_7ce3e45a3515.slice/crio-d11b01c5bcc34853a56715bfd1f0fdd91d923fbb7a69c8ab51686c3209d963d0 WatchSource:0}: Error finding container d11b01c5bcc34853a56715bfd1f0fdd91d923fbb7a69c8ab51686c3209d963d0: Status 404 returned error can't find the container with id d11b01c5bcc34853a56715bfd1f0fdd91d923fbb7a69c8ab51686c3209d963d0 Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.092678 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64fcdd45d5-qrlv7"] Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.118418 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6858d48877-cb9m7"] Mar 14 09:17:44 crc kubenswrapper[4869]: E0314 09:17:44.118786 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3444ec97-2efb-4a1a-b288-5b518eda928d" containerName="init" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.118807 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3444ec97-2efb-4a1a-b288-5b518eda928d" containerName="init" Mar 14 09:17:44 crc kubenswrapper[4869]: E0314 09:17:44.118837 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3444ec97-2efb-4a1a-b288-5b518eda928d" containerName="dnsmasq-dns" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.118843 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3444ec97-2efb-4a1a-b288-5b518eda928d" containerName="dnsmasq-dns" Mar 14 
09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.118989 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3444ec97-2efb-4a1a-b288-5b518eda928d" containerName="dnsmasq-dns" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.119985 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.122082 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.169195 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwcc\" (UniqueName: \"kubernetes.io/projected/29d98eb6-9de3-43b9-99fc-21858f58fe40-kube-api-access-kzwcc\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.170099 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-dns-svc\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.170145 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-ovsdbserver-nb\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.170184 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-config\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.176598 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6858d48877-cb9m7"] Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.273618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwcc\" (UniqueName: \"kubernetes.io/projected/29d98eb6-9de3-43b9-99fc-21858f58fe40-kube-api-access-kzwcc\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.273709 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-dns-svc\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.273730 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-ovsdbserver-nb\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.273748 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-config\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.274617 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-config\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.274634 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-ovsdbserver-nb\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.274863 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-dns-svc\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.275622 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.292615 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-w9t8k"] Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.293496 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.295393 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwcc\" (UniqueName: \"kubernetes.io/projected/29d98eb6-9de3-43b9-99fc-21858f58fe40-kube-api-access-kzwcc\") pod \"dnsmasq-dns-6858d48877-cb9m7\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.295883 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.300015 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.300220 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.300487 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-746kr" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.300740 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.302319 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w9t8k"] Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.302854 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.339040 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.375704 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-lock\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.375746 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f9eed2-67db-4563-8642-5da1a1198e3e-combined-ca-bundle\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.375767 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67f9eed2-67db-4563-8642-5da1a1198e3e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.375793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdcc4\" (UniqueName: \"kubernetes.io/projected/67f9eed2-67db-4563-8642-5da1a1198e3e-kube-api-access-rdcc4\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.375958 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f9eed2-67db-4563-8642-5da1a1198e3e-config\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.376011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lp8nk\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-kube-api-access-lp8nk\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.376255 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.376340 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.376365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/67f9eed2-67db-4563-8642-5da1a1198e3e-ovn-rundir\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.376407 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.376734 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-cache\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.376799 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/67f9eed2-67db-4563-8642-5da1a1198e3e-ovs-rundir\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.458285 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478434 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478477 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/67f9eed2-67db-4563-8642-5da1a1198e3e-ovn-rundir\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478497 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478555 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-cache\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478584 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/67f9eed2-67db-4563-8642-5da1a1198e3e-ovs-rundir\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478620 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-lock\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f9eed2-67db-4563-8642-5da1a1198e3e-combined-ca-bundle\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478663 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67f9eed2-67db-4563-8642-5da1a1198e3e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478679 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdcc4\" (UniqueName: 
\"kubernetes.io/projected/67f9eed2-67db-4563-8642-5da1a1198e3e-kube-api-access-rdcc4\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f9eed2-67db-4563-8642-5da1a1198e3e-config\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478721 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp8nk\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-kube-api-access-lp8nk\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.478761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.479023 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/67f9eed2-67db-4563-8642-5da1a1198e3e-ovn-rundir\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.479153 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") device mount 
path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.480020 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-cache\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: E0314 09:17:44.480202 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:44 crc kubenswrapper[4869]: E0314 09:17:44.480231 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:44 crc kubenswrapper[4869]: E0314 09:17:44.480300 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift podName:8089ea8f-74c0-4fa4-93bd-dc107394a9e5 nodeName:}" failed. No retries permitted until 2026-03-14 09:17:44.980268505 +0000 UTC m=+1217.952550558 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift") pod "swift-storage-0" (UID: "8089ea8f-74c0-4fa4-93bd-dc107394a9e5") : configmap "swift-ring-files" not found Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.480668 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/67f9eed2-67db-4563-8642-5da1a1198e3e-ovs-rundir\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.481036 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-lock\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.484708 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f9eed2-67db-4563-8642-5da1a1198e3e-config\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.484910 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67f9eed2-67db-4563-8642-5da1a1198e3e-combined-ca-bundle\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.485485 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/67f9eed2-67db-4563-8642-5da1a1198e3e-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.485666 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.531389 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdcc4\" (UniqueName: \"kubernetes.io/projected/67f9eed2-67db-4563-8642-5da1a1198e3e-kube-api-access-rdcc4\") pod \"ovn-controller-metrics-w9t8k\" (UID: \"67f9eed2-67db-4563-8642-5da1a1198e3e\") " pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.542775 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.593840 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp8nk\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-kube-api-access-lp8nk\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.647899 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-w9t8k" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.665502 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.706022 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6858d48877-cb9m7"] Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.733891 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d6957795c-zgr2v"] Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.739241 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.754914 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.763400 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6957795c-zgr2v"] Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.819869 4869 generic.go:334] "Generic (PLEG): container finished" podID="a659e267-4af1-4594-94c2-7ce3e45a3515" containerID="a75d8b37e7b6b4cb3b2d31c5e514c0a01a8682e77a864f463a2d58572f2d0747" exitCode=0 Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.819958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" event={"ID":"a659e267-4af1-4594-94c2-7ce3e45a3515","Type":"ContainerDied","Data":"a75d8b37e7b6b4cb3b2d31c5e514c0a01a8682e77a864f463a2d58572f2d0747"} Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.819988 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" event={"ID":"a659e267-4af1-4594-94c2-7ce3e45a3515","Type":"ContainerStarted","Data":"d11b01c5bcc34853a56715bfd1f0fdd91d923fbb7a69c8ab51686c3209d963d0"} Mar 14 09:17:44 crc 
kubenswrapper[4869]: I0314 09:17:44.852159 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.852627 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.902530 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.903231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.903375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-dns-svc\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 09:17:44.903487 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtln4\" (UniqueName: \"kubernetes.io/projected/48035592-e6a5-424f-873e-5bfb77db4f85-kube-api-access-gtln4\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:44 crc kubenswrapper[4869]: I0314 
09:17:44.903555 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-config\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.009301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.009670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.009708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-dns-svc\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.009759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtln4\" (UniqueName: \"kubernetes.io/projected/48035592-e6a5-424f-873e-5bfb77db4f85-kube-api-access-gtln4\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.009778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-config\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.009840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: E0314 09:17:45.010631 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:45 crc kubenswrapper[4869]: E0314 09:17:45.010666 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:45 crc kubenswrapper[4869]: E0314 09:17:45.010702 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift podName:8089ea8f-74c0-4fa4-93bd-dc107394a9e5 nodeName:}" failed. No retries permitted until 2026-03-14 09:17:46.010686625 +0000 UTC m=+1218.982968678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift") pod "swift-storage-0" (UID: "8089ea8f-74c0-4fa4-93bd-dc107394a9e5") : configmap "swift-ring-files" not found Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.020023 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.020051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-config\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.020191 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-dns-svc\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.020372 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.038838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtln4\" (UniqueName: 
\"kubernetes.io/projected/48035592-e6a5-424f-873e-5bfb77db4f85-kube-api-access-gtln4\") pod \"dnsmasq-dns-5d6957795c-zgr2v\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.044339 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.139437 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.293709 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.407819 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 14 09:17:45 crc kubenswrapper[4869]: E0314 09:17:45.408124 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a659e267-4af1-4594-94c2-7ce3e45a3515" containerName="init" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.408137 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a659e267-4af1-4594-94c2-7ce3e45a3515" containerName="init" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.408308 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a659e267-4af1-4594-94c2-7ce3e45a3515" containerName="init" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.409134 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.416928 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-flxff" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.417037 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.417252 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.417369 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.417596 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-config\") pod \"a659e267-4af1-4594-94c2-7ce3e45a3515\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.417664 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbzft\" (UniqueName: \"kubernetes.io/projected/a659e267-4af1-4594-94c2-7ce3e45a3515-kube-api-access-pbzft\") pod \"a659e267-4af1-4594-94c2-7ce3e45a3515\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.417745 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-dns-svc\") pod \"a659e267-4af1-4594-94c2-7ce3e45a3515\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.422266 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.521061 4869 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6858d48877-cb9m7"] Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.521119 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a659e267-4af1-4594-94c2-7ce3e45a3515-kube-api-access-pbzft" (OuterVolumeSpecName: "kube-api-access-pbzft") pod "a659e267-4af1-4594-94c2-7ce3e45a3515" (UID: "a659e267-4af1-4594-94c2-7ce3e45a3515"). InnerVolumeSpecName "kube-api-access-pbzft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.522272 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbzft\" (UniqueName: \"kubernetes.io/projected/a659e267-4af1-4594-94c2-7ce3e45a3515-kube-api-access-pbzft\") pod \"a659e267-4af1-4594-94c2-7ce3e45a3515\" (UID: \"a659e267-4af1-4594-94c2-7ce3e45a3515\") " Mar 14 09:17:45 crc kubenswrapper[4869]: W0314 09:17:45.522531 4869 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a659e267-4af1-4594-94c2-7ce3e45a3515/volumes/kubernetes.io~projected/kube-api-access-pbzft Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.522797 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a659e267-4af1-4594-94c2-7ce3e45a3515-kube-api-access-pbzft" (OuterVolumeSpecName: "kube-api-access-pbzft") pod "a659e267-4af1-4594-94c2-7ce3e45a3515" (UID: "a659e267-4af1-4594-94c2-7ce3e45a3515"). InnerVolumeSpecName "kube-api-access-pbzft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.522899 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278ce4ec-200c-403d-b2a5-b69101f3e5aa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.522970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/278ce4ec-200c-403d-b2a5-b69101f3e5aa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.523033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278ce4ec-200c-403d-b2a5-b69101f3e5aa-config\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.523123 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/278ce4ec-200c-403d-b2a5-b69101f3e5aa-scripts\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.523175 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7mld\" (UniqueName: \"kubernetes.io/projected/278ce4ec-200c-403d-b2a5-b69101f3e5aa-kube-api-access-s7mld\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 
09:17:45.523208 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/278ce4ec-200c-403d-b2a5-b69101f3e5aa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.523295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/278ce4ec-200c-403d-b2a5-b69101f3e5aa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.523351 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbzft\" (UniqueName: \"kubernetes.io/projected/a659e267-4af1-4594-94c2-7ce3e45a3515-kube-api-access-pbzft\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.525531 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a659e267-4af1-4594-94c2-7ce3e45a3515" (UID: "a659e267-4af1-4594-94c2-7ce3e45a3515"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.526930 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-config" (OuterVolumeSpecName: "config") pod "a659e267-4af1-4594-94c2-7ce3e45a3515" (UID: "a659e267-4af1-4594-94c2-7ce3e45a3515"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.600803 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w9t8k"] Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.624717 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/278ce4ec-200c-403d-b2a5-b69101f3e5aa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.624884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/278ce4ec-200c-403d-b2a5-b69101f3e5aa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.624908 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278ce4ec-200c-403d-b2a5-b69101f3e5aa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.624938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/278ce4ec-200c-403d-b2a5-b69101f3e5aa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.624982 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278ce4ec-200c-403d-b2a5-b69101f3e5aa-config\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " 
pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.625005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/278ce4ec-200c-403d-b2a5-b69101f3e5aa-scripts\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.625035 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7mld\" (UniqueName: \"kubernetes.io/projected/278ce4ec-200c-403d-b2a5-b69101f3e5aa-kube-api-access-s7mld\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.625083 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.625093 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a659e267-4af1-4594-94c2-7ce3e45a3515-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.629218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/278ce4ec-200c-403d-b2a5-b69101f3e5aa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.630124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278ce4ec-200c-403d-b2a5-b69101f3e5aa-config\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.630186 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/278ce4ec-200c-403d-b2a5-b69101f3e5aa-scripts\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.630452 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/278ce4ec-200c-403d-b2a5-b69101f3e5aa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.632149 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/278ce4ec-200c-403d-b2a5-b69101f3e5aa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.634579 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278ce4ec-200c-403d-b2a5-b69101f3e5aa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.642255 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7mld\" (UniqueName: \"kubernetes.io/projected/278ce4ec-200c-403d-b2a5-b69101f3e5aa-kube-api-access-s7mld\") pod \"ovn-northd-0\" (UID: \"278ce4ec-200c-403d-b2a5-b69101f3e5aa\") " pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.727189 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3444ec97-2efb-4a1a-b288-5b518eda928d" path="/var/lib/kubelet/pods/3444ec97-2efb-4a1a-b288-5b518eda928d/volumes" Mar 14 09:17:45 crc kubenswrapper[4869]: 
I0314 09:17:45.769631 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6957795c-zgr2v"] Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.813802 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.852787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" event={"ID":"48035592-e6a5-424f-873e-5bfb77db4f85","Type":"ContainerStarted","Data":"7a49cfbd8fd3252911a493fa5fa87d3947e60bb0364ca35f55c468b3b890f27e"} Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.866227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" event={"ID":"a659e267-4af1-4594-94c2-7ce3e45a3515","Type":"ContainerDied","Data":"d11b01c5bcc34853a56715bfd1f0fdd91d923fbb7a69c8ab51686c3209d963d0"} Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.866275 4869 scope.go:117] "RemoveContainer" containerID="a75d8b37e7b6b4cb3b2d31c5e514c0a01a8682e77a864f463a2d58572f2d0747" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.866892 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64fcdd45d5-qrlv7" Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.872795 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6858d48877-cb9m7" event={"ID":"29d98eb6-9de3-43b9-99fc-21858f58fe40","Type":"ContainerStarted","Data":"ff08bcbf7e74588a049e28879e738f9be3f10cc375afdeb4d27b7f6c8ea4336c"} Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.880076 4869 generic.go:334] "Generic (PLEG): container finished" podID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerID="b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065" exitCode=0 Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.880141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerDied","Data":"b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065"} Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.882694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w9t8k" event={"ID":"67f9eed2-67db-4563-8642-5da1a1198e3e","Type":"ContainerStarted","Data":"943ca5e75d97fdc28362bcc77dc25b15628b3b19c013befb76cbf33fb4e9d304"} Mar 14 09:17:45 crc kubenswrapper[4869]: I0314 09:17:45.998749 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64fcdd45d5-qrlv7"] Mar 14 09:17:46 crc kubenswrapper[4869]: I0314 09:17:46.022655 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64fcdd45d5-qrlv7"] Mar 14 09:17:46 crc kubenswrapper[4869]: I0314 09:17:46.045101 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:46 crc kubenswrapper[4869]: E0314 
09:17:46.045430 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:46 crc kubenswrapper[4869]: E0314 09:17:46.045467 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:46 crc kubenswrapper[4869]: E0314 09:17:46.045545 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift podName:8089ea8f-74c0-4fa4-93bd-dc107394a9e5 nodeName:}" failed. No retries permitted until 2026-03-14 09:17:48.045525566 +0000 UTC m=+1221.017807619 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift") pod "swift-storage-0" (UID: "8089ea8f-74c0-4fa4-93bd-dc107394a9e5") : configmap "swift-ring-files" not found Mar 14 09:17:46 crc kubenswrapper[4869]: I0314 09:17:46.348916 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 14 09:17:46 crc kubenswrapper[4869]: I0314 09:17:46.911172 4869 generic.go:334] "Generic (PLEG): container finished" podID="29d98eb6-9de3-43b9-99fc-21858f58fe40" containerID="62a5c1114f04242d0b8fd579df646943bc7d4ee493227c1b4a5425e6599fdcb8" exitCode=0 Mar 14 09:17:46 crc kubenswrapper[4869]: I0314 09:17:46.911282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6858d48877-cb9m7" event={"ID":"29d98eb6-9de3-43b9-99fc-21858f58fe40","Type":"ContainerDied","Data":"62a5c1114f04242d0b8fd579df646943bc7d4ee493227c1b4a5425e6599fdcb8"} Mar 14 09:17:46 crc kubenswrapper[4869]: I0314 09:17:46.927907 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"278ce4ec-200c-403d-b2a5-b69101f3e5aa","Type":"ContainerStarted","Data":"ce562bb6c9b417081d97d3ca467f6d145a23032051dcc2b584c9d7c6224e2efb"} Mar 14 09:17:46 
crc kubenswrapper[4869]: I0314 09:17:46.950899 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w9t8k" event={"ID":"67f9eed2-67db-4563-8642-5da1a1198e3e","Type":"ContainerStarted","Data":"3af677ea3b87179769d0feba920a234b00fa61b5a8a038619ab7834289d40f4b"} Mar 14 09:17:46 crc kubenswrapper[4869]: I0314 09:17:46.975359 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-w9t8k" podStartSLOduration=2.975340084 podStartE2EDuration="2.975340084s" podCreationTimestamp="2026-03-14 09:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:17:46.969931502 +0000 UTC m=+1219.942213575" watchObservedRunningTime="2026-03-14 09:17:46.975340084 +0000 UTC m=+1219.947622137" Mar 14 09:17:46 crc kubenswrapper[4869]: I0314 09:17:46.981163 4869 generic.go:334] "Generic (PLEG): container finished" podID="48035592-e6a5-424f-873e-5bfb77db4f85" containerID="6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80" exitCode=0 Mar 14 09:17:46 crc kubenswrapper[4869]: I0314 09:17:46.981420 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" event={"ID":"48035592-e6a5-424f-873e-5bfb77db4f85","Type":"ContainerDied","Data":"6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80"} Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.738390 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a659e267-4af1-4594-94c2-7ce3e45a3515" path="/var/lib/kubelet/pods/a659e267-4af1-4594-94c2-7ce3e45a3515/volumes" Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.770145 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.779207 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-dns-svc\") pod \"29d98eb6-9de3-43b9-99fc-21858f58fe40\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.779649 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-ovsdbserver-nb\") pod \"29d98eb6-9de3-43b9-99fc-21858f58fe40\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.779738 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-config\") pod \"29d98eb6-9de3-43b9-99fc-21858f58fe40\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.779819 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzwcc\" (UniqueName: \"kubernetes.io/projected/29d98eb6-9de3-43b9-99fc-21858f58fe40-kube-api-access-kzwcc\") pod \"29d98eb6-9de3-43b9-99fc-21858f58fe40\" (UID: \"29d98eb6-9de3-43b9-99fc-21858f58fe40\") " Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.816783 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d98eb6-9de3-43b9-99fc-21858f58fe40-kube-api-access-kzwcc" (OuterVolumeSpecName: "kube-api-access-kzwcc") pod "29d98eb6-9de3-43b9-99fc-21858f58fe40" (UID: "29d98eb6-9de3-43b9-99fc-21858f58fe40"). InnerVolumeSpecName "kube-api-access-kzwcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.818115 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29d98eb6-9de3-43b9-99fc-21858f58fe40" (UID: "29d98eb6-9de3-43b9-99fc-21858f58fe40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.833877 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29d98eb6-9de3-43b9-99fc-21858f58fe40" (UID: "29d98eb6-9de3-43b9-99fc-21858f58fe40"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.833969 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-config" (OuterVolumeSpecName: "config") pod "29d98eb6-9de3-43b9-99fc-21858f58fe40" (UID: "29d98eb6-9de3-43b9-99fc-21858f58fe40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.881711 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzwcc\" (UniqueName: \"kubernetes.io/projected/29d98eb6-9de3-43b9-99fc-21858f58fe40-kube-api-access-kzwcc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.881748 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.881761 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:47 crc kubenswrapper[4869]: I0314 09:17:47.881772 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d98eb6-9de3-43b9-99fc-21858f58fe40-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.018003 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6858d48877-cb9m7" event={"ID":"29d98eb6-9de3-43b9-99fc-21858f58fe40","Type":"ContainerDied","Data":"ff08bcbf7e74588a049e28879e738f9be3f10cc375afdeb4d27b7f6c8ea4336c"} Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.018056 4869 scope.go:117] "RemoveContainer" containerID="62a5c1114f04242d0b8fd579df646943bc7d4ee493227c1b4a5425e6599fdcb8" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.018165 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6858d48877-cb9m7" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.033595 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"278ce4ec-200c-403d-b2a5-b69101f3e5aa","Type":"ContainerStarted","Data":"bc45b14bc722e4454421186513705b2be034001cf25bc745f1c6d147674b6ce4"} Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.084871 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:48 crc kubenswrapper[4869]: E0314 09:17:48.085044 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:48 crc kubenswrapper[4869]: E0314 09:17:48.085067 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:48 crc kubenswrapper[4869]: E0314 09:17:48.085112 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift podName:8089ea8f-74c0-4fa4-93bd-dc107394a9e5 nodeName:}" failed. No retries permitted until 2026-03-14 09:17:52.085097007 +0000 UTC m=+1225.057379060 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift") pod "swift-storage-0" (UID: "8089ea8f-74c0-4fa4-93bd-dc107394a9e5") : configmap "swift-ring-files" not found Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.149776 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6858d48877-cb9m7"] Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.165686 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6858d48877-cb9m7"] Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.184841 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ql8h6"] Mar 14 09:17:48 crc kubenswrapper[4869]: E0314 09:17:48.185288 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d98eb6-9de3-43b9-99fc-21858f58fe40" containerName="init" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.185312 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d98eb6-9de3-43b9-99fc-21858f58fe40" containerName="init" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.185524 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d98eb6-9de3-43b9-99fc-21858f58fe40" containerName="init" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.186229 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.189360 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.189497 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.189575 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.193034 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ql8h6"] Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.289500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-swiftconf\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.289824 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-scripts\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.289871 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1321f800-bd9a-41b6-9bfc-b4f48a644230-etc-swift\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.289937 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-combined-ca-bundle\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.290213 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-dispersionconf\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.290405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-ring-data-devices\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.290535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nldhr\" (UniqueName: \"kubernetes.io/projected/1321f800-bd9a-41b6-9bfc-b4f48a644230-kube-api-access-nldhr\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.392090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-ring-data-devices\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc 
kubenswrapper[4869]: I0314 09:17:48.392158 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nldhr\" (UniqueName: \"kubernetes.io/projected/1321f800-bd9a-41b6-9bfc-b4f48a644230-kube-api-access-nldhr\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.392202 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-swiftconf\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.392222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-scripts\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.392239 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1321f800-bd9a-41b6-9bfc-b4f48a644230-etc-swift\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.392262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-combined-ca-bundle\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.392301 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-dispersionconf\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.394105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-ring-data-devices\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.394399 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1321f800-bd9a-41b6-9bfc-b4f48a644230-etc-swift\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.394995 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-scripts\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.398347 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-swiftconf\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.398853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-dispersionconf\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.400212 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-combined-ca-bundle\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.415189 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nldhr\" (UniqueName: \"kubernetes.io/projected/1321f800-bd9a-41b6-9bfc-b4f48a644230-kube-api-access-nldhr\") pod \"swift-ring-rebalance-ql8h6\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.517668 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.740497 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 14 09:17:48 crc kubenswrapper[4869]: I0314 09:17:48.741011 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 14 09:17:49 crc kubenswrapper[4869]: I0314 09:17:49.014898 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ql8h6"] Mar 14 09:17:49 crc kubenswrapper[4869]: I0314 09:17:49.052658 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" event={"ID":"48035592-e6a5-424f-873e-5bfb77db4f85","Type":"ContainerStarted","Data":"d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a"} Mar 14 09:17:49 crc kubenswrapper[4869]: I0314 09:17:49.052853 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:49 crc kubenswrapper[4869]: I0314 09:17:49.057642 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"278ce4ec-200c-403d-b2a5-b69101f3e5aa","Type":"ContainerStarted","Data":"9e0f22842f5ff43c4f472a4df1cd9e625c142d61b84dda89775713e23eebb9f5"} Mar 14 09:17:49 crc kubenswrapper[4869]: I0314 09:17:49.057764 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 14 09:17:49 crc kubenswrapper[4869]: I0314 09:17:49.059488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ql8h6" event={"ID":"1321f800-bd9a-41b6-9bfc-b4f48a644230","Type":"ContainerStarted","Data":"459b7619d364d67e99fafba62b2d91230d00a293f00208c863e617374d877919"} Mar 14 09:17:49 crc kubenswrapper[4869]: I0314 09:17:49.071289 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" podStartSLOduration=5.071272921 podStartE2EDuration="5.071272921s" podCreationTimestamp="2026-03-14 09:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:17:49.068357709 +0000 UTC m=+1222.040639772" watchObservedRunningTime="2026-03-14 09:17:49.071272921 +0000 UTC m=+1222.043554984" Mar 14 09:17:49 crc kubenswrapper[4869]: I0314 09:17:49.096356 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.8606985910000002 podStartE2EDuration="4.096337917s" podCreationTimestamp="2026-03-14 09:17:45 +0000 UTC" firstStartedPulling="2026-03-14 09:17:46.35848773 +0000 UTC m=+1219.330769793" lastFinishedPulling="2026-03-14 09:17:47.594127066 +0000 UTC m=+1220.566409119" observedRunningTime="2026-03-14 09:17:49.089851258 +0000 UTC m=+1222.062133311" watchObservedRunningTime="2026-03-14 09:17:49.096337917 +0000 UTC m=+1222.068619970" Mar 14 09:17:49 crc kubenswrapper[4869]: I0314 09:17:49.715714 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d98eb6-9de3-43b9-99fc-21858f58fe40" path="/var/lib/kubelet/pods/29d98eb6-9de3-43b9-99fc-21858f58fe40/volumes" Mar 14 09:17:50 crc kubenswrapper[4869]: I0314 09:17:50.184853 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:50 crc kubenswrapper[4869]: I0314 09:17:50.185233 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:50 crc kubenswrapper[4869]: I0314 09:17:50.307522 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:51 crc kubenswrapper[4869]: I0314 09:17:51.200415 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-cell1-galera-0" Mar 14 09:17:52 crc kubenswrapper[4869]: I0314 09:17:52.168999 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:17:52 crc kubenswrapper[4869]: E0314 09:17:52.169426 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:17:52 crc kubenswrapper[4869]: E0314 09:17:52.169445 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:17:52 crc kubenswrapper[4869]: E0314 09:17:52.169494 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift podName:8089ea8f-74c0-4fa4-93bd-dc107394a9e5 nodeName:}" failed. No retries permitted until 2026-03-14 09:18:00.169476667 +0000 UTC m=+1233.141758720 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift") pod "swift-storage-0" (UID: "8089ea8f-74c0-4fa4-93bd-dc107394a9e5") : configmap "swift-ring-files" not found Mar 14 09:17:52 crc kubenswrapper[4869]: I0314 09:17:52.342575 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 14 09:17:52 crc kubenswrapper[4869]: I0314 09:17:52.690298 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.142377 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-5fe4-account-create-update-fzp8k"] Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.144035 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-5fe4-account-create-update-fzp8k" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.147143 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.158737 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-9v9bw"] Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.160058 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-9v9bw" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.166786 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-9v9bw"] Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.177575 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-5fe4-account-create-update-fzp8k"] Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.289620 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5rr7\" (UniqueName: \"kubernetes.io/projected/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-kube-api-access-b5rr7\") pod \"watcher-db-create-9v9bw\" (UID: \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\") " pod="openstack/watcher-db-create-9v9bw" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.289722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc489\" (UniqueName: \"kubernetes.io/projected/97e55491-9c61-49eb-84fb-38ada8084c67-kube-api-access-wc489\") pod \"watcher-5fe4-account-create-update-fzp8k\" (UID: \"97e55491-9c61-49eb-84fb-38ada8084c67\") " pod="openstack/watcher-5fe4-account-create-update-fzp8k" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.289782 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-operator-scripts\") pod \"watcher-db-create-9v9bw\" (UID: \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\") " pod="openstack/watcher-db-create-9v9bw" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.289818 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e55491-9c61-49eb-84fb-38ada8084c67-operator-scripts\") pod \"watcher-5fe4-account-create-update-fzp8k\" (UID: 
\"97e55491-9c61-49eb-84fb-38ada8084c67\") " pod="openstack/watcher-5fe4-account-create-update-fzp8k" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.390860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5rr7\" (UniqueName: \"kubernetes.io/projected/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-kube-api-access-b5rr7\") pod \"watcher-db-create-9v9bw\" (UID: \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\") " pod="openstack/watcher-db-create-9v9bw" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.391024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc489\" (UniqueName: \"kubernetes.io/projected/97e55491-9c61-49eb-84fb-38ada8084c67-kube-api-access-wc489\") pod \"watcher-5fe4-account-create-update-fzp8k\" (UID: \"97e55491-9c61-49eb-84fb-38ada8084c67\") " pod="openstack/watcher-5fe4-account-create-update-fzp8k" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.391104 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-operator-scripts\") pod \"watcher-db-create-9v9bw\" (UID: \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\") " pod="openstack/watcher-db-create-9v9bw" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.391151 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e55491-9c61-49eb-84fb-38ada8084c67-operator-scripts\") pod \"watcher-5fe4-account-create-update-fzp8k\" (UID: \"97e55491-9c61-49eb-84fb-38ada8084c67\") " pod="openstack/watcher-5fe4-account-create-update-fzp8k" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.392029 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-operator-scripts\") pod 
\"watcher-db-create-9v9bw\" (UID: \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\") " pod="openstack/watcher-db-create-9v9bw" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.392128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e55491-9c61-49eb-84fb-38ada8084c67-operator-scripts\") pod \"watcher-5fe4-account-create-update-fzp8k\" (UID: \"97e55491-9c61-49eb-84fb-38ada8084c67\") " pod="openstack/watcher-5fe4-account-create-update-fzp8k" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.410392 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc489\" (UniqueName: \"kubernetes.io/projected/97e55491-9c61-49eb-84fb-38ada8084c67-kube-api-access-wc489\") pod \"watcher-5fe4-account-create-update-fzp8k\" (UID: \"97e55491-9c61-49eb-84fb-38ada8084c67\") " pod="openstack/watcher-5fe4-account-create-update-fzp8k" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.413422 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5rr7\" (UniqueName: \"kubernetes.io/projected/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-kube-api-access-b5rr7\") pod \"watcher-db-create-9v9bw\" (UID: \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\") " pod="openstack/watcher-db-create-9v9bw" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.494729 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-5fe4-account-create-update-fzp8k" Mar 14 09:17:53 crc kubenswrapper[4869]: I0314 09:17:53.504067 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-9v9bw" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.141662 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.199044 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7c99478f-pn2sb"] Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.199265 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" podUID="e0310552-638b-4d95-8946-87187f5815a6" containerName="dnsmasq-dns" containerID="cri-o://19d6ccd6ac78bd85bc8bf2790d7b7c44584325ca5fd72fa7bfe97a1c13049e21" gracePeriod=10 Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.686680 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-9g9kc"] Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.687907 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9g9kc" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.695847 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9g9kc"] Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.826467 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" podUID="e0310552-638b-4d95-8946-87187f5815a6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection refused" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.832434 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0e4e-account-create-update-gt57r"] Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.835192 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0e4e-account-create-update-gt57r" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.839884 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.840036 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj466\" (UniqueName: \"kubernetes.io/projected/012663ea-c91c-4157-b2e3-a11a65a9a6d1-kube-api-access-pj466\") pod \"glance-db-create-9g9kc\" (UID: \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\") " pod="openstack/glance-db-create-9g9kc" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.840674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/012663ea-c91c-4157-b2e3-a11a65a9a6d1-operator-scripts\") pod \"glance-db-create-9g9kc\" (UID: \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\") " pod="openstack/glance-db-create-9g9kc" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.841577 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0e4e-account-create-update-gt57r"] Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.942556 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkqn\" (UniqueName: \"kubernetes.io/projected/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-kube-api-access-pfkqn\") pod \"glance-0e4e-account-create-update-gt57r\" (UID: \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\") " pod="openstack/glance-0e4e-account-create-update-gt57r" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.942785 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-operator-scripts\") pod \"glance-0e4e-account-create-update-gt57r\" 
(UID: \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\") " pod="openstack/glance-0e4e-account-create-update-gt57r" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.942833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj466\" (UniqueName: \"kubernetes.io/projected/012663ea-c91c-4157-b2e3-a11a65a9a6d1-kube-api-access-pj466\") pod \"glance-db-create-9g9kc\" (UID: \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\") " pod="openstack/glance-db-create-9g9kc" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.942868 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/012663ea-c91c-4157-b2e3-a11a65a9a6d1-operator-scripts\") pod \"glance-db-create-9g9kc\" (UID: \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\") " pod="openstack/glance-db-create-9g9kc" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.943773 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/012663ea-c91c-4157-b2e3-a11a65a9a6d1-operator-scripts\") pod \"glance-db-create-9g9kc\" (UID: \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\") " pod="openstack/glance-db-create-9g9kc" Mar 14 09:17:55 crc kubenswrapper[4869]: I0314 09:17:55.961858 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj466\" (UniqueName: \"kubernetes.io/projected/012663ea-c91c-4157-b2e3-a11a65a9a6d1-kube-api-access-pj466\") pod \"glance-db-create-9g9kc\" (UID: \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\") " pod="openstack/glance-db-create-9g9kc" Mar 14 09:17:56 crc kubenswrapper[4869]: I0314 09:17:56.002954 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9g9kc" Mar 14 09:17:56 crc kubenswrapper[4869]: I0314 09:17:56.044765 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkqn\" (UniqueName: \"kubernetes.io/projected/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-kube-api-access-pfkqn\") pod \"glance-0e4e-account-create-update-gt57r\" (UID: \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\") " pod="openstack/glance-0e4e-account-create-update-gt57r" Mar 14 09:17:56 crc kubenswrapper[4869]: I0314 09:17:56.044927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-operator-scripts\") pod \"glance-0e4e-account-create-update-gt57r\" (UID: \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\") " pod="openstack/glance-0e4e-account-create-update-gt57r" Mar 14 09:17:56 crc kubenswrapper[4869]: I0314 09:17:56.045613 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-operator-scripts\") pod \"glance-0e4e-account-create-update-gt57r\" (UID: \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\") " pod="openstack/glance-0e4e-account-create-update-gt57r" Mar 14 09:17:56 crc kubenswrapper[4869]: I0314 09:17:56.062683 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkqn\" (UniqueName: \"kubernetes.io/projected/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-kube-api-access-pfkqn\") pod \"glance-0e4e-account-create-update-gt57r\" (UID: \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\") " pod="openstack/glance-0e4e-account-create-update-gt57r" Mar 14 09:17:56 crc kubenswrapper[4869]: I0314 09:17:56.130186 4869 generic.go:334] "Generic (PLEG): container finished" podID="e0310552-638b-4d95-8946-87187f5815a6" containerID="19d6ccd6ac78bd85bc8bf2790d7b7c44584325ca5fd72fa7bfe97a1c13049e21" exitCode=0 Mar 14 09:17:56 
crc kubenswrapper[4869]: I0314 09:17:56.130229 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" event={"ID":"e0310552-638b-4d95-8946-87187f5815a6","Type":"ContainerDied","Data":"19d6ccd6ac78bd85bc8bf2790d7b7c44584325ca5fd72fa7bfe97a1c13049e21"} Mar 14 09:17:56 crc kubenswrapper[4869]: I0314 09:17:56.156027 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e4e-account-create-update-gt57r" Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.399588 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4q5tx"] Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.402942 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4q5tx" Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.406416 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.416712 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4q5tx"] Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.574627 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454e5c2-25a0-4404-9447-10485e0adea0-operator-scripts\") pod \"root-account-create-update-4q5tx\" (UID: \"d454e5c2-25a0-4404-9447-10485e0adea0\") " pod="openstack/root-account-create-update-4q5tx" Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.575129 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46vsk\" (UniqueName: \"kubernetes.io/projected/d454e5c2-25a0-4404-9447-10485e0adea0-kube-api-access-46vsk\") pod \"root-account-create-update-4q5tx\" (UID: \"d454e5c2-25a0-4404-9447-10485e0adea0\") " 
pod="openstack/root-account-create-update-4q5tx" Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.677370 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454e5c2-25a0-4404-9447-10485e0adea0-operator-scripts\") pod \"root-account-create-update-4q5tx\" (UID: \"d454e5c2-25a0-4404-9447-10485e0adea0\") " pod="openstack/root-account-create-update-4q5tx" Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.677544 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46vsk\" (UniqueName: \"kubernetes.io/projected/d454e5c2-25a0-4404-9447-10485e0adea0-kube-api-access-46vsk\") pod \"root-account-create-update-4q5tx\" (UID: \"d454e5c2-25a0-4404-9447-10485e0adea0\") " pod="openstack/root-account-create-update-4q5tx" Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.678324 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454e5c2-25a0-4404-9447-10485e0adea0-operator-scripts\") pod \"root-account-create-update-4q5tx\" (UID: \"d454e5c2-25a0-4404-9447-10485e0adea0\") " pod="openstack/root-account-create-update-4q5tx" Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.696972 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46vsk\" (UniqueName: \"kubernetes.io/projected/d454e5c2-25a0-4404-9447-10485e0adea0-kube-api-access-46vsk\") pod \"root-account-create-update-4q5tx\" (UID: \"d454e5c2-25a0-4404-9447-10485e0adea0\") " pod="openstack/root-account-create-update-4q5tx" Mar 14 09:17:57 crc kubenswrapper[4869]: I0314 09:17:57.729871 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4q5tx" Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.521970 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.622381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9fw6\" (UniqueName: \"kubernetes.io/projected/e0310552-638b-4d95-8946-87187f5815a6-kube-api-access-f9fw6\") pod \"e0310552-638b-4d95-8946-87187f5815a6\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.622871 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-dns-svc\") pod \"e0310552-638b-4d95-8946-87187f5815a6\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.622926 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-config\") pod \"e0310552-638b-4d95-8946-87187f5815a6\" (UID: \"e0310552-638b-4d95-8946-87187f5815a6\") " Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.653942 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0310552-638b-4d95-8946-87187f5815a6-kube-api-access-f9fw6" (OuterVolumeSpecName: "kube-api-access-f9fw6") pod "e0310552-638b-4d95-8946-87187f5815a6" (UID: "e0310552-638b-4d95-8946-87187f5815a6"). InnerVolumeSpecName "kube-api-access-f9fw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.681183 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0310552-638b-4d95-8946-87187f5815a6" (UID: "e0310552-638b-4d95-8946-87187f5815a6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.698646 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-config" (OuterVolumeSpecName: "config") pod "e0310552-638b-4d95-8946-87187f5815a6" (UID: "e0310552-638b-4d95-8946-87187f5815a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.729487 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9fw6\" (UniqueName: \"kubernetes.io/projected/e0310552-638b-4d95-8946-87187f5815a6-kube-api-access-f9fw6\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.729555 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.729564 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0310552-638b-4d95-8946-87187f5815a6-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.915593 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-5fe4-account-create-update-fzp8k"] Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.924049 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4q5tx"] Mar 14 09:17:59 crc kubenswrapper[4869]: W0314 09:17:59.928198 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd454e5c2_25a0_4404_9447_10485e0adea0.slice/crio-8c8e2f4b69cc1732c66349adc074feb6ad6916be8936cd86c2df2f2ba1f18318 WatchSource:0}: Error finding container 
8c8e2f4b69cc1732c66349adc074feb6ad6916be8936cd86c2df2f2ba1f18318: Status 404 returned error can't find the container with id 8c8e2f4b69cc1732c66349adc074feb6ad6916be8936cd86c2df2f2ba1f18318 Mar 14 09:17:59 crc kubenswrapper[4869]: W0314 09:17:59.933372 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb74b0055_dff6_4cff_82ec_6fd1abdc5a9c.slice/crio-277665cfbec183303f5377843cef50f4356393dd7a75ad564e80e86acb3f95a0 WatchSource:0}: Error finding container 277665cfbec183303f5377843cef50f4356393dd7a75ad564e80e86acb3f95a0: Status 404 returned error can't find the container with id 277665cfbec183303f5377843cef50f4356393dd7a75ad564e80e86acb3f95a0 Mar 14 09:17:59 crc kubenswrapper[4869]: I0314 09:17:59.941889 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0e4e-account-create-update-gt57r"] Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.103243 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-9v9bw"] Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.118562 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9g9kc"] Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.146863 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29557998-725qn"] Mar 14 09:18:00 crc kubenswrapper[4869]: E0314 09:18:00.147328 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0310552-638b-4d95-8946-87187f5815a6" containerName="init" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.147352 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0310552-638b-4d95-8946-87187f5815a6" containerName="init" Mar 14 09:18:00 crc kubenswrapper[4869]: E0314 09:18:00.147395 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0310552-638b-4d95-8946-87187f5815a6" containerName="dnsmasq-dns" Mar 14 09:18:00 crc 
kubenswrapper[4869]: I0314 09:18:00.147405 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0310552-638b-4d95-8946-87187f5815a6" containerName="dnsmasq-dns" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.147638 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0310552-638b-4d95-8946-87187f5815a6" containerName="dnsmasq-dns" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.148435 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557998-725qn" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.150956 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.151231 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.151380 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.159717 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557998-725qn"] Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.166075 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerStarted","Data":"be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b"} Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.168107 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0e4e-account-create-update-gt57r" event={"ID":"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c","Type":"ContainerStarted","Data":"5a9d402087bc1888f2beebc6b3a722abb82b89c47f8f154e4db8ed56747f36ad"} Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.168137 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-0e4e-account-create-update-gt57r" event={"ID":"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c","Type":"ContainerStarted","Data":"277665cfbec183303f5377843cef50f4356393dd7a75ad564e80e86acb3f95a0"} Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.170012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ql8h6" event={"ID":"1321f800-bd9a-41b6-9bfc-b4f48a644230","Type":"ContainerStarted","Data":"cf7c7b2589e8fad9b7cc7b5b8be75c39ef5bb0d78af7e4ce4ed80b5655a80a56"} Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.172934 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-9v9bw" event={"ID":"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1","Type":"ContainerStarted","Data":"55aae1e7fad4920c9e7b38a1998ac0619a80ec64088e855df1be6ba8ddee0079"} Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.187104 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-5fe4-account-create-update-fzp8k" event={"ID":"97e55491-9c61-49eb-84fb-38ada8084c67","Type":"ContainerStarted","Data":"1d5973706cef0fda67ef57492bd9c5ab9604008148aa56e15643ed30f99150e4"} Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.192054 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.192340 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7c99478f-pn2sb" event={"ID":"e0310552-638b-4d95-8946-87187f5815a6","Type":"ContainerDied","Data":"19d4125657fed37183fe706cbe4ddbb5fb1996014c7e0a35a125b6e42f364c7a"} Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.192532 4869 scope.go:117] "RemoveContainer" containerID="19d6ccd6ac78bd85bc8bf2790d7b7c44584325ca5fd72fa7bfe97a1c13049e21" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.195978 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4q5tx" event={"ID":"d454e5c2-25a0-4404-9447-10485e0adea0","Type":"ContainerStarted","Data":"8c8e2f4b69cc1732c66349adc074feb6ad6916be8936cd86c2df2f2ba1f18318"} Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.226469 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-0e4e-account-create-update-gt57r" podStartSLOduration=5.226435068 podStartE2EDuration="5.226435068s" podCreationTimestamp="2026-03-14 09:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:00.20339049 +0000 UTC m=+1233.175672543" watchObservedRunningTime="2026-03-14 09:18:00.226435068 +0000 UTC m=+1233.198717121" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.237418 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdgz\" (UniqueName: \"kubernetes.io/projected/d22c0c3e-3573-40f6-8bd9-000533db9955-kube-api-access-rzdgz\") pod \"auto-csr-approver-29557998-725qn\" (UID: \"d22c0c3e-3573-40f6-8bd9-000533db9955\") " pod="openshift-infra/auto-csr-approver-29557998-725qn" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.237498 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:18:00 crc kubenswrapper[4869]: E0314 09:18:00.237620 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 14 09:18:00 crc kubenswrapper[4869]: E0314 09:18:00.237633 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 14 09:18:00 crc kubenswrapper[4869]: E0314 09:18:00.237674 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift podName:8089ea8f-74c0-4fa4-93bd-dc107394a9e5 nodeName:}" failed. No retries permitted until 2026-03-14 09:18:16.237661606 +0000 UTC m=+1249.209943649 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift") pod "swift-storage-0" (UID: "8089ea8f-74c0-4fa4-93bd-dc107394a9e5") : configmap "swift-ring-files" not found Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.244265 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ql8h6" podStartSLOduration=1.9815022199999999 podStartE2EDuration="12.244236838s" podCreationTimestamp="2026-03-14 09:17:48 +0000 UTC" firstStartedPulling="2026-03-14 09:17:49.025619519 +0000 UTC m=+1221.997901572" lastFinishedPulling="2026-03-14 09:17:59.288354137 +0000 UTC m=+1232.260636190" observedRunningTime="2026-03-14 09:18:00.22448013 +0000 UTC m=+1233.196762183" watchObservedRunningTime="2026-03-14 09:18:00.244236838 +0000 UTC m=+1233.216518891" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.339638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzdgz\" (UniqueName: \"kubernetes.io/projected/d22c0c3e-3573-40f6-8bd9-000533db9955-kube-api-access-rzdgz\") pod \"auto-csr-approver-29557998-725qn\" (UID: \"d22c0c3e-3573-40f6-8bd9-000533db9955\") " pod="openshift-infra/auto-csr-approver-29557998-725qn" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.360348 4869 scope.go:117] "RemoveContainer" containerID="b6c3f5f3875dad5f728510fd9fdb9f03cf98fc3b69d6c2d9731b8f2e6764de90" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.366294 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzdgz\" (UniqueName: \"kubernetes.io/projected/d22c0c3e-3573-40f6-8bd9-000533db9955-kube-api-access-rzdgz\") pod \"auto-csr-approver-29557998-725qn\" (UID: \"d22c0c3e-3573-40f6-8bd9-000533db9955\") " pod="openshift-infra/auto-csr-approver-29557998-725qn" Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.527594 4869 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-7f7c99478f-pn2sb"] Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.541676 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f7c99478f-pn2sb"] Mar 14 09:18:00 crc kubenswrapper[4869]: I0314 09:18:00.690913 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29557998-725qn" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.150056 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29557998-725qn"] Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.208322 4869 generic.go:334] "Generic (PLEG): container finished" podID="d454e5c2-25a0-4404-9447-10485e0adea0" containerID="4217d63593d81e73c201fffcb2505d02ff5848fab2e75c31cedbb41324918ecd" exitCode=0 Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.208440 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4q5tx" event={"ID":"d454e5c2-25a0-4404-9447-10485e0adea0","Type":"ContainerDied","Data":"4217d63593d81e73c201fffcb2505d02ff5848fab2e75c31cedbb41324918ecd"} Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.209835 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557998-725qn" event={"ID":"d22c0c3e-3573-40f6-8bd9-000533db9955","Type":"ContainerStarted","Data":"7435b4ff9af89fc8dc18ab470ca5df605ed86768f5796416e2669d41ab868ba2"} Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.211772 4869 generic.go:334] "Generic (PLEG): container finished" podID="da13efd4-046a-4059-9b04-b731f2d164b5" containerID="5470deb1007aa96d051d8f81bf041614709ee4d983175e2f629b99fa456c865e" exitCode=0 Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.211828 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" 
event={"ID":"da13efd4-046a-4059-9b04-b731f2d164b5","Type":"ContainerDied","Data":"5470deb1007aa96d051d8f81bf041614709ee4d983175e2f629b99fa456c865e"} Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.227255 4869 generic.go:334] "Generic (PLEG): container finished" podID="9735b30c-8379-4478-9460-51882d519d32" containerID="0c84ee3fb1a1cef8e52391c99da3410859c3758e3be7c7e1bbfe2febdceee9c1" exitCode=0 Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.227416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9735b30c-8379-4478-9460-51882d519d32","Type":"ContainerDied","Data":"0c84ee3fb1a1cef8e52391c99da3410859c3758e3be7c7e1bbfe2febdceee9c1"} Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.229849 4869 generic.go:334] "Generic (PLEG): container finished" podID="97e55491-9c61-49eb-84fb-38ada8084c67" containerID="3ee9531a5f9fbf93f8156f8bc75990c5094d8de53466253ccabee6c719cec9dd" exitCode=0 Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.229930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-5fe4-account-create-update-fzp8k" event={"ID":"97e55491-9c61-49eb-84fb-38ada8084c67","Type":"ContainerDied","Data":"3ee9531a5f9fbf93f8156f8bc75990c5094d8de53466253ccabee6c719cec9dd"} Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.234278 4869 generic.go:334] "Generic (PLEG): container finished" podID="b74b0055-dff6-4cff-82ec-6fd1abdc5a9c" containerID="5a9d402087bc1888f2beebc6b3a722abb82b89c47f8f154e4db8ed56747f36ad" exitCode=0 Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.234338 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0e4e-account-create-update-gt57r" event={"ID":"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c","Type":"ContainerDied","Data":"5a9d402087bc1888f2beebc6b3a722abb82b89c47f8f154e4db8ed56747f36ad"} Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.240142 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="0d18cddc-84fe-40cd-87c2-041c2f7bcaa1" containerID="4ab4198b7bffa9e702dc41d03b1020180b04c26f5151bdc3c0f13fb862589185" exitCode=0 Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.240211 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-9v9bw" event={"ID":"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1","Type":"ContainerDied","Data":"4ab4198b7bffa9e702dc41d03b1020180b04c26f5151bdc3c0f13fb862589185"} Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.241607 4869 generic.go:334] "Generic (PLEG): container finished" podID="012663ea-c91c-4157-b2e3-a11a65a9a6d1" containerID="15b264684b3a6ff3daed674a759454b5a2c1ebba769df0ffa2f82711eee80446" exitCode=0 Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.242351 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9g9kc" event={"ID":"012663ea-c91c-4157-b2e3-a11a65a9a6d1","Type":"ContainerDied","Data":"15b264684b3a6ff3daed674a759454b5a2c1ebba769df0ffa2f82711eee80446"} Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.242413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9g9kc" event={"ID":"012663ea-c91c-4157-b2e3-a11a65a9a6d1","Type":"ContainerStarted","Data":"040ea67d9a1f95a473ce372abff5aa34f699e1c872971ad5accbd2a85ae89c90"} Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.642865 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rcj56"] Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.645255 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.657621 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rcj56"] Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.724175 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0310552-638b-4d95-8946-87187f5815a6" path="/var/lib/kubelet/pods/e0310552-638b-4d95-8946-87187f5815a6/volumes" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.737593 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-69f0-account-create-update-pgvcz"] Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.738922 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.740982 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.751201 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69f0-account-create-update-pgvcz"] Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.767473 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e29751-96de-4f9a-9756-6bde3535c6ee-operator-scripts\") pod \"keystone-db-create-rcj56\" (UID: \"33e29751-96de-4f9a-9756-6bde3535c6ee\") " pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.767539 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wxlt\" (UniqueName: \"kubernetes.io/projected/33e29751-96de-4f9a-9756-6bde3535c6ee-kube-api-access-2wxlt\") pod \"keystone-db-create-rcj56\" (UID: \"33e29751-96de-4f9a-9756-6bde3535c6ee\") " 
pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.866384 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-d9nfj"] Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.867858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.868732 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c24332-9232-4665-a910-640c344ea424-operator-scripts\") pod \"keystone-69f0-account-create-update-pgvcz\" (UID: \"d9c24332-9232-4665-a910-640c344ea424\") " pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.868936 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e29751-96de-4f9a-9756-6bde3535c6ee-operator-scripts\") pod \"keystone-db-create-rcj56\" (UID: \"33e29751-96de-4f9a-9756-6bde3535c6ee\") " pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.868993 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wxlt\" (UniqueName: \"kubernetes.io/projected/33e29751-96de-4f9a-9756-6bde3535c6ee-kube-api-access-2wxlt\") pod \"keystone-db-create-rcj56\" (UID: \"33e29751-96de-4f9a-9756-6bde3535c6ee\") " pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.869025 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgf5b\" (UniqueName: \"kubernetes.io/projected/d9c24332-9232-4665-a910-640c344ea424-kube-api-access-xgf5b\") pod \"keystone-69f0-account-create-update-pgvcz\" (UID: \"d9c24332-9232-4665-a910-640c344ea424\") " 
pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.871651 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e29751-96de-4f9a-9756-6bde3535c6ee-operator-scripts\") pod \"keystone-db-create-rcj56\" (UID: \"33e29751-96de-4f9a-9756-6bde3535c6ee\") " pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.873155 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d9nfj"] Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.889070 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wxlt\" (UniqueName: \"kubernetes.io/projected/33e29751-96de-4f9a-9756-6bde3535c6ee-kube-api-access-2wxlt\") pod \"keystone-db-create-rcj56\" (UID: \"33e29751-96de-4f9a-9756-6bde3535c6ee\") " pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.970634 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c24332-9232-4665-a910-640c344ea424-operator-scripts\") pod \"keystone-69f0-account-create-update-pgvcz\" (UID: \"d9c24332-9232-4665-a910-640c344ea424\") " pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.970827 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p69pt\" (UniqueName: \"kubernetes.io/projected/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-kube-api-access-p69pt\") pod \"placement-db-create-d9nfj\" (UID: \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\") " pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.970899 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-operator-scripts\") pod \"placement-db-create-d9nfj\" (UID: \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\") " pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.970928 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgf5b\" (UniqueName: \"kubernetes.io/projected/d9c24332-9232-4665-a910-640c344ea424-kube-api-access-xgf5b\") pod \"keystone-69f0-account-create-update-pgvcz\" (UID: \"d9c24332-9232-4665-a910-640c344ea424\") " pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:01 crc kubenswrapper[4869]: I0314 09:18:01.972087 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c24332-9232-4665-a910-640c344ea424-operator-scripts\") pod \"keystone-69f0-account-create-update-pgvcz\" (UID: \"d9c24332-9232-4665-a910-640c344ea424\") " pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.037282 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d6d5-account-create-update-rd5gj"] Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.038756 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.042128 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.045165 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d6d5-account-create-update-rd5gj"] Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.072026 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p69pt\" (UniqueName: \"kubernetes.io/projected/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-kube-api-access-p69pt\") pod \"placement-db-create-d9nfj\" (UID: \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\") " pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.072096 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-operator-scripts\") pod \"placement-db-create-d9nfj\" (UID: \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\") " pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.072943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-operator-scripts\") pod \"placement-db-create-d9nfj\" (UID: \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\") " pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.097236 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p69pt\" (UniqueName: \"kubernetes.io/projected/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-kube-api-access-p69pt\") pod \"placement-db-create-d9nfj\" (UID: \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\") " pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:02 crc 
kubenswrapper[4869]: I0314 09:18:02.097677 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgf5b\" (UniqueName: \"kubernetes.io/projected/d9c24332-9232-4665-a910-640c344ea424-kube-api-access-xgf5b\") pod \"keystone-69f0-account-create-update-pgvcz\" (UID: \"d9c24332-9232-4665-a910-640c344ea424\") " pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.174664 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7sn\" (UniqueName: \"kubernetes.io/projected/cb9d5689-4433-473b-9f9b-edd43281b328-kube-api-access-7n7sn\") pod \"placement-d6d5-account-create-update-rd5gj\" (UID: \"cb9d5689-4433-473b-9f9b-edd43281b328\") " pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.174927 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9d5689-4433-473b-9f9b-edd43281b328-operator-scripts\") pod \"placement-d6d5-account-create-update-rd5gj\" (UID: \"cb9d5689-4433-473b-9f9b-edd43281b328\") " pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.250654 4869 generic.go:334] "Generic (PLEG): container finished" podID="38c3b4a0-0639-4d3b-ae4f-3e272522326f" containerID="acb1141604bd391fc34f193e9241b4fd4fc568fdd20ae980dfb311966fd3a661" exitCode=0 Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.250771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"38c3b4a0-0639-4d3b-ae4f-3e272522326f","Type":"ContainerDied","Data":"acb1141604bd391fc34f193e9241b4fd4fc568fdd20ae980dfb311966fd3a661"} Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.284426 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7n7sn\" (UniqueName: \"kubernetes.io/projected/cb9d5689-4433-473b-9f9b-edd43281b328-kube-api-access-7n7sn\") pod \"placement-d6d5-account-create-update-rd5gj\" (UID: \"cb9d5689-4433-473b-9f9b-edd43281b328\") " pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.284863 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9d5689-4433-473b-9f9b-edd43281b328-operator-scripts\") pod \"placement-d6d5-account-create-update-rd5gj\" (UID: \"cb9d5689-4433-473b-9f9b-edd43281b328\") " pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.286020 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9d5689-4433-473b-9f9b-edd43281b328-operator-scripts\") pod \"placement-d6d5-account-create-update-rd5gj\" (UID: \"cb9d5689-4433-473b-9f9b-edd43281b328\") " pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.322062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7sn\" (UniqueName: \"kubernetes.io/projected/cb9d5689-4433-473b-9f9b-edd43281b328-kube-api-access-7n7sn\") pod \"placement-d6d5-account-create-update-rd5gj\" (UID: \"cb9d5689-4433-473b-9f9b-edd43281b328\") " pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.539171 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.588890 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.589817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.604478 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.665422 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e4e-account-create-update-gt57r" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.802174 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-operator-scripts\") pod \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\" (UID: \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\") " Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.802291 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfkqn\" (UniqueName: \"kubernetes.io/projected/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-kube-api-access-pfkqn\") pod \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\" (UID: \"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c\") " Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.805214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b74b0055-dff6-4cff-82ec-6fd1abdc5a9c" (UID: "b74b0055-dff6-4cff-82ec-6fd1abdc5a9c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.810743 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-kube-api-access-pfkqn" (OuterVolumeSpecName: "kube-api-access-pfkqn") pod "b74b0055-dff6-4cff-82ec-6fd1abdc5a9c" (UID: "b74b0055-dff6-4cff-82ec-6fd1abdc5a9c"). InnerVolumeSpecName "kube-api-access-pfkqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.872044 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9g9kc" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.904790 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfkqn\" (UniqueName: \"kubernetes.io/projected/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-kube-api-access-pfkqn\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.904844 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.923233 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-5fe4-account-create-update-fzp8k" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.934133 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-9v9bw" Mar 14 09:18:02 crc kubenswrapper[4869]: I0314 09:18:02.943818 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4q5tx" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.006008 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-operator-scripts\") pod \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\" (UID: \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\") " Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.006078 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj466\" (UniqueName: \"kubernetes.io/projected/012663ea-c91c-4157-b2e3-a11a65a9a6d1-kube-api-access-pj466\") pod \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\" (UID: \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\") " Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.006153 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/012663ea-c91c-4157-b2e3-a11a65a9a6d1-operator-scripts\") pod \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\" (UID: \"012663ea-c91c-4157-b2e3-a11a65a9a6d1\") " Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.006243 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e55491-9c61-49eb-84fb-38ada8084c67-operator-scripts\") pod \"97e55491-9c61-49eb-84fb-38ada8084c67\" (UID: \"97e55491-9c61-49eb-84fb-38ada8084c67\") " Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.006335 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc489\" (UniqueName: \"kubernetes.io/projected/97e55491-9c61-49eb-84fb-38ada8084c67-kube-api-access-wc489\") pod \"97e55491-9c61-49eb-84fb-38ada8084c67\" (UID: \"97e55491-9c61-49eb-84fb-38ada8084c67\") " Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.006415 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-b5rr7\" (UniqueName: \"kubernetes.io/projected/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-kube-api-access-b5rr7\") pod \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\" (UID: \"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1\") " Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.006919 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d18cddc-84fe-40cd-87c2-041c2f7bcaa1" (UID: "0d18cddc-84fe-40cd-87c2-041c2f7bcaa1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.007453 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012663ea-c91c-4157-b2e3-a11a65a9a6d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "012663ea-c91c-4157-b2e3-a11a65a9a6d1" (UID: "012663ea-c91c-4157-b2e3-a11a65a9a6d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.007559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97e55491-9c61-49eb-84fb-38ada8084c67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97e55491-9c61-49eb-84fb-38ada8084c67" (UID: "97e55491-9c61-49eb-84fb-38ada8084c67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.011286 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012663ea-c91c-4157-b2e3-a11a65a9a6d1-kube-api-access-pj466" (OuterVolumeSpecName: "kube-api-access-pj466") pod "012663ea-c91c-4157-b2e3-a11a65a9a6d1" (UID: "012663ea-c91c-4157-b2e3-a11a65a9a6d1"). 
InnerVolumeSpecName "kube-api-access-pj466". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.011828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-kube-api-access-b5rr7" (OuterVolumeSpecName: "kube-api-access-b5rr7") pod "0d18cddc-84fe-40cd-87c2-041c2f7bcaa1" (UID: "0d18cddc-84fe-40cd-87c2-041c2f7bcaa1"). InnerVolumeSpecName "kube-api-access-b5rr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.012137 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e55491-9c61-49eb-84fb-38ada8084c67-kube-api-access-wc489" (OuterVolumeSpecName: "kube-api-access-wc489") pod "97e55491-9c61-49eb-84fb-38ada8084c67" (UID: "97e55491-9c61-49eb-84fb-38ada8084c67"). InnerVolumeSpecName "kube-api-access-wc489". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.108220 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46vsk\" (UniqueName: \"kubernetes.io/projected/d454e5c2-25a0-4404-9447-10485e0adea0-kube-api-access-46vsk\") pod \"d454e5c2-25a0-4404-9447-10485e0adea0\" (UID: \"d454e5c2-25a0-4404-9447-10485e0adea0\") " Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.108341 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454e5c2-25a0-4404-9447-10485e0adea0-operator-scripts\") pod \"d454e5c2-25a0-4404-9447-10485e0adea0\" (UID: \"d454e5c2-25a0-4404-9447-10485e0adea0\") " Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.108814 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e55491-9c61-49eb-84fb-38ada8084c67-operator-scripts\") on node 
\"crc\" DevicePath \"\"" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.108835 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc489\" (UniqueName: \"kubernetes.io/projected/97e55491-9c61-49eb-84fb-38ada8084c67-kube-api-access-wc489\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.108850 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5rr7\" (UniqueName: \"kubernetes.io/projected/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-kube-api-access-b5rr7\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.108861 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.108871 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj466\" (UniqueName: \"kubernetes.io/projected/012663ea-c91c-4157-b2e3-a11a65a9a6d1-kube-api-access-pj466\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.108882 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/012663ea-c91c-4157-b2e3-a11a65a9a6d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.112663 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d454e5c2-25a0-4404-9447-10485e0adea0-kube-api-access-46vsk" (OuterVolumeSpecName: "kube-api-access-46vsk") pod "d454e5c2-25a0-4404-9447-10485e0adea0" (UID: "d454e5c2-25a0-4404-9447-10485e0adea0"). InnerVolumeSpecName "kube-api-access-46vsk". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.210921 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46vsk\" (UniqueName: \"kubernetes.io/projected/d454e5c2-25a0-4404-9447-10485e0adea0-kube-api-access-46vsk\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:03 crc kubenswrapper[4869]: W0314 09:18:03.280075 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33e29751_96de_4f9a_9756_6bde3535c6ee.slice/crio-00debabcb0dcf282f15d6b0781c8ac3ee597c469e1761b3da06e7e9090008607 WatchSource:0}: Error finding container 00debabcb0dcf282f15d6b0781c8ac3ee597c469e1761b3da06e7e9090008607: Status 404 returned error can't find the container with id 00debabcb0dcf282f15d6b0781c8ac3ee597c469e1761b3da06e7e9090008607
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.283746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerStarted","Data":"b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b"}
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.289153 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0e4e-account-create-update-gt57r"
Mar 14 09:18:03 crc kubenswrapper[4869]: W0314 09:18:03.289638 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c24332_9232_4665_a910_640c344ea424.slice/crio-4df48c3a91ed7d8e3873a4339b4d6043585884fb6181e8607a3c2b089211cbe5 WatchSource:0}: Error finding container 4df48c3a91ed7d8e3873a4339b4d6043585884fb6181e8607a3c2b089211cbe5: Status 404 returned error can't find the container with id 4df48c3a91ed7d8e3873a4339b4d6043585884fb6181e8607a3c2b089211cbe5
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.289691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0e4e-account-create-update-gt57r" event={"ID":"b74b0055-dff6-4cff-82ec-6fd1abdc5a9c","Type":"ContainerDied","Data":"277665cfbec183303f5377843cef50f4356393dd7a75ad564e80e86acb3f95a0"}
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.289721 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="277665cfbec183303f5377843cef50f4356393dd7a75ad564e80e86acb3f95a0"
Mar 14 09:18:03 crc kubenswrapper[4869]: W0314 09:18:03.290617 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb9d5689_4433_473b_9f9b_edd43281b328.slice/crio-800a824ee9ec2681c0d9ceefa137596dfb16c01bf5963fa79b6c4b5840a15253 WatchSource:0}: Error finding container 800a824ee9ec2681c0d9ceefa137596dfb16c01bf5963fa79b6c4b5840a15253: Status 404 returned error can't find the container with id 800a824ee9ec2681c0d9ceefa137596dfb16c01bf5963fa79b6c4b5840a15253
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.293815 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-9v9bw" event={"ID":"0d18cddc-84fe-40cd-87c2-041c2f7bcaa1","Type":"ContainerDied","Data":"55aae1e7fad4920c9e7b38a1998ac0619a80ec64088e855df1be6ba8ddee0079"}
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.293850 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55aae1e7fad4920c9e7b38a1998ac0619a80ec64088e855df1be6ba8ddee0079"
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.293968 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-9v9bw"
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.297764 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-5fe4-account-create-update-fzp8k" event={"ID":"97e55491-9c61-49eb-84fb-38ada8084c67","Type":"ContainerDied","Data":"1d5973706cef0fda67ef57492bd9c5ab9604008148aa56e15643ed30f99150e4"}
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.297824 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d5973706cef0fda67ef57492bd9c5ab9604008148aa56e15643ed30f99150e4"
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.297875 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-5fe4-account-create-update-fzp8k"
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.300826 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9g9kc" event={"ID":"012663ea-c91c-4157-b2e3-a11a65a9a6d1","Type":"ContainerDied","Data":"040ea67d9a1f95a473ce372abff5aa34f699e1c872971ad5accbd2a85ae89c90"}
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.300838 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9g9kc"
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.300865 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="040ea67d9a1f95a473ce372abff5aa34f699e1c872971ad5accbd2a85ae89c90"
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.303211 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4q5tx" event={"ID":"d454e5c2-25a0-4404-9447-10485e0adea0","Type":"ContainerDied","Data":"8c8e2f4b69cc1732c66349adc074feb6ad6916be8936cd86c2df2f2ba1f18318"}
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.303254 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c8e2f4b69cc1732c66349adc074feb6ad6916be8936cd86c2df2f2ba1f18318"
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.303314 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4q5tx"
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.318533 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rcj56"]
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.325719 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d6d5-account-create-update-rd5gj"]
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.333594 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69f0-account-create-update-pgvcz"]
Mar 14 09:18:03 crc kubenswrapper[4869]: I0314 09:18:03.442527 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d9nfj"]
Mar 14 09:18:04 crc kubenswrapper[4869]: I0314 09:18:04.311995 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69f0-account-create-update-pgvcz" event={"ID":"d9c24332-9232-4665-a910-640c344ea424","Type":"ContainerStarted","Data":"4df48c3a91ed7d8e3873a4339b4d6043585884fb6181e8607a3c2b089211cbe5"}
Mar 14 09:18:04 crc kubenswrapper[4869]: I0314 09:18:04.313207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d6d5-account-create-update-rd5gj" event={"ID":"cb9d5689-4433-473b-9f9b-edd43281b328","Type":"ContainerStarted","Data":"800a824ee9ec2681c0d9ceefa137596dfb16c01bf5963fa79b6c4b5840a15253"}
Mar 14 09:18:04 crc kubenswrapper[4869]: I0314 09:18:04.314794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcj56" event={"ID":"33e29751-96de-4f9a-9756-6bde3535c6ee","Type":"ContainerStarted","Data":"00debabcb0dcf282f15d6b0781c8ac3ee597c469e1761b3da06e7e9090008607"}
Mar 14 09:18:04 crc kubenswrapper[4869]: I0314 09:18:04.315967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d9nfj" event={"ID":"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4","Type":"ContainerStarted","Data":"55017d36d8b7d5e8bbdd138d99f97c7617ca7cb1b942e0866e4be845f72c0e31"}
Mar 14 09:18:05 crc kubenswrapper[4869]: I0314 09:18:05.884912 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.020004 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-j8jwz"]
Mar 14 09:18:06 crc kubenswrapper[4869]: E0314 09:18:06.021650 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d18cddc-84fe-40cd-87c2-041c2f7bcaa1" containerName="mariadb-database-create"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.021667 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d18cddc-84fe-40cd-87c2-041c2f7bcaa1" containerName="mariadb-database-create"
Mar 14 09:18:06 crc kubenswrapper[4869]: E0314 09:18:06.021681 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74b0055-dff6-4cff-82ec-6fd1abdc5a9c" containerName="mariadb-account-create-update"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.021687 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74b0055-dff6-4cff-82ec-6fd1abdc5a9c" containerName="mariadb-account-create-update"
Mar 14 09:18:06 crc kubenswrapper[4869]: E0314 09:18:06.021695 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e55491-9c61-49eb-84fb-38ada8084c67" containerName="mariadb-account-create-update"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.021701 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e55491-9c61-49eb-84fb-38ada8084c67" containerName="mariadb-account-create-update"
Mar 14 09:18:06 crc kubenswrapper[4869]: E0314 09:18:06.021722 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d454e5c2-25a0-4404-9447-10485e0adea0" containerName="mariadb-account-create-update"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.021728 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d454e5c2-25a0-4404-9447-10485e0adea0" containerName="mariadb-account-create-update"
Mar 14 09:18:06 crc kubenswrapper[4869]: E0314 09:18:06.021739 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012663ea-c91c-4157-b2e3-a11a65a9a6d1" containerName="mariadb-database-create"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.021746 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="012663ea-c91c-4157-b2e3-a11a65a9a6d1" containerName="mariadb-database-create"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.022071 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d18cddc-84fe-40cd-87c2-041c2f7bcaa1" containerName="mariadb-database-create"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.022087 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e55491-9c61-49eb-84fb-38ada8084c67" containerName="mariadb-account-create-update"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.022100 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d454e5c2-25a0-4404-9447-10485e0adea0" containerName="mariadb-account-create-update"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.022110 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74b0055-dff6-4cff-82ec-6fd1abdc5a9c" containerName="mariadb-account-create-update"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.022117 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="012663ea-c91c-4157-b2e3-a11a65a9a6d1" containerName="mariadb-database-create"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.022871 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.027200 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.027426 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ghmv7"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.041448 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j8jwz"]
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.165223 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-db-sync-config-data\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.165274 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-config-data\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.165327 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsr2b\" (UniqueName: \"kubernetes.io/projected/79e8c4d6-376f-4130-8057-06519abb646a-kube-api-access-fsr2b\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.165485 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-combined-ca-bundle\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.267866 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsr2b\" (UniqueName: \"kubernetes.io/projected/79e8c4d6-376f-4130-8057-06519abb646a-kube-api-access-fsr2b\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.267956 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-combined-ca-bundle\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.269156 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-db-sync-config-data\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.269209 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-config-data\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.275131 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-combined-ca-bundle\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.275467 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-config-data\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.279579 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-db-sync-config-data\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.289039 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsr2b\" (UniqueName: \"kubernetes.io/projected/79e8c4d6-376f-4130-8057-06519abb646a-kube-api-access-fsr2b\") pod \"glance-db-sync-j8jwz\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:06 crc kubenswrapper[4869]: I0314 09:18:06.350040 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j8jwz"
Mar 14 09:18:08 crc kubenswrapper[4869]: I0314 09:18:08.844246 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4q5tx"]
Mar 14 09:18:08 crc kubenswrapper[4869]: I0314 09:18:08.852201 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4q5tx"]
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.267238 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d454e5c2-25a0-4404-9447-10485e0adea0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d454e5c2-25a0-4404-9447-10485e0adea0" (UID: "d454e5c2-25a0-4404-9447-10485e0adea0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.350162 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d454e5c2-25a0-4404-9447-10485e0adea0-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.402402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"38c3b4a0-0639-4d3b-ae4f-3e272522326f","Type":"ContainerStarted","Data":"79489cfbc067f6d7dd349dd64423d40a769cb62523b557c9e2e7599c8f95b25d"}
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.410723 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d9nfj" event={"ID":"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4","Type":"ContainerStarted","Data":"da2a11514e60d1c5cf9ee9f12bc072d4b5591d007f769a86f6cd1dbbb3ef87a8"}
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.417546 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69f0-account-create-update-pgvcz" event={"ID":"d9c24332-9232-4665-a910-640c344ea424","Type":"ContainerStarted","Data":"0f26a3e5e60ee458256bbcf1ad03f7a873517ed74e9d67de3757d0dc94020638"}
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.421483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d6d5-account-create-update-rd5gj" event={"ID":"cb9d5689-4433-473b-9f9b-edd43281b328","Type":"ContainerStarted","Data":"76911c237eb8e3cfa6942fb95a9b77f03de514d6ebcdf85b0a7d157bc1a4bfb1"}
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.437298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9735b30c-8379-4478-9460-51882d519d32","Type":"ContainerStarted","Data":"c115c42638149947a1f38851dc43286d2865ec89f20d97a20c9f15dcb69e9f11"}
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.439232 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.465303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"da13efd4-046a-4059-9b04-b731f2d164b5","Type":"ContainerStarted","Data":"6fb31c7fcce8d64f14da888f74032f101fdce2bddbd0d30e75fa11be03ef1ea0"}
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.466237 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0"
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.470463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcj56" event={"ID":"33e29751-96de-4f9a-9756-6bde3535c6ee","Type":"ContainerStarted","Data":"d9a23c61653d81cff146499b3d2649592eac0bd90a5ddb742cbadb816a70c04e"}
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.496043 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-d9nfj" podStartSLOduration=9.496027594 podStartE2EDuration="9.496027594s" podCreationTimestamp="2026-03-14 09:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:10.469176632 +0000 UTC m=+1243.441458685" watchObservedRunningTime="2026-03-14 09:18:10.496027594 +0000 UTC m=+1243.468309647"
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.501809 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j8jwz"]
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.547621 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.560114 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-69f0-account-create-update-pgvcz" podStartSLOduration=9.560091555 podStartE2EDuration="9.560091555s" podCreationTimestamp="2026-03-14 09:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:10.5542127 +0000 UTC m=+1243.526494763" watchObservedRunningTime="2026-03-14 09:18:10.560091555 +0000 UTC m=+1243.532373608"
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.562138 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=63.870855097 podStartE2EDuration="1m5.562124715s" podCreationTimestamp="2026-03-14 09:17:05 +0000 UTC" firstStartedPulling="2026-03-14 09:17:25.111271126 +0000 UTC m=+1198.083553209" lastFinishedPulling="2026-03-14 09:17:26.802540774 +0000 UTC m=+1199.774822827" observedRunningTime="2026-03-14 09:18:10.522253181 +0000 UTC m=+1243.494535254" watchObservedRunningTime="2026-03-14 09:18:10.562124715 +0000 UTC m=+1243.534406768"
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.589769 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=60.26772071 podStartE2EDuration="1m5.589752217s" podCreationTimestamp="2026-03-14 09:17:05 +0000 UTC" firstStartedPulling="2026-03-14 09:17:21.475591556 +0000 UTC m=+1194.447873619" lastFinishedPulling="2026-03-14 09:17:26.797623073 +0000 UTC m=+1199.769905126" observedRunningTime="2026-03-14 09:18:10.589365557 +0000 UTC m=+1243.561647610" watchObservedRunningTime="2026-03-14 09:18:10.589752217 +0000 UTC m=+1243.562034270"
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.606871 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29557998-725qn" podStartSLOduration=9.803509515 podStartE2EDuration="10.606840148s" podCreationTimestamp="2026-03-14 09:18:00 +0000 UTC" firstStartedPulling="2026-03-14 09:18:01.166685839 +0000 UTC m=+1234.138967892" lastFinishedPulling="2026-03-14 09:18:01.970016472 +0000 UTC m=+1234.942298525" observedRunningTime="2026-03-14 09:18:10.605831724 +0000 UTC m=+1243.578113797" watchObservedRunningTime="2026-03-14 09:18:10.606840148 +0000 UTC m=+1243.579122201"
Mar 14 09:18:10 crc kubenswrapper[4869]: I0314 09:18:10.631564 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-rcj56" podStartSLOduration=9.631542568 podStartE2EDuration="9.631542568s" podCreationTimestamp="2026-03-14 09:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:10.624342631 +0000 UTC m=+1243.596624694" watchObservedRunningTime="2026-03-14 09:18:10.631542568 +0000 UTC m=+1243.603824651"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.246139 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vznj2" podUID="e8735cd0-7d17-4b28-b5fb-99219798ee6f" containerName="ovn-controller" probeResult="failure" output=<
Mar 14 09:18:11 crc kubenswrapper[4869]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Mar 14 09:18:11 crc kubenswrapper[4869]: >
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.264001 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rllnb"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.271590 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rllnb"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.498853 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vznj2-config-fj4lp"]
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.502711 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j8jwz" event={"ID":"79e8c4d6-376f-4130-8057-06519abb646a","Type":"ContainerStarted","Data":"5bb5b5c09cb6b2eb9e181432ce61446828f5c2893c30e5b9c01f7b447f1456f4"}
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.502843 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.507880 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.510352 4869 generic.go:334] "Generic (PLEG): container finished" podID="d22c0c3e-3573-40f6-8bd9-000533db9955" containerID="195a69bdc40b87a3eccdc21bd245b09941a49b215cd821f013b05906852a42dd" exitCode=0
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.511190 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557998-725qn" event={"ID":"d22c0c3e-3573-40f6-8bd9-000533db9955","Type":"ContainerDied","Data":"195a69bdc40b87a3eccdc21bd245b09941a49b215cd821f013b05906852a42dd"}
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.511335 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.542661 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vznj2-config-fj4lp"]
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.558053 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d6d5-account-create-update-rd5gj" podStartSLOduration=9.55803304 podStartE2EDuration="9.55803304s" podCreationTimestamp="2026-03-14 09:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:11.548604387 +0000 UTC m=+1244.520886450" watchObservedRunningTime="2026-03-14 09:18:11.55803304 +0000 UTC m=+1244.530315083"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.574231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-log-ovn\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.574380 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49grb\" (UniqueName: \"kubernetes.io/projected/aa0947ea-b2ac-48d6-91d0-ce4d21948347-kube-api-access-49grb\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.574602 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.574632 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run-ovn\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.574811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-scripts\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.574913 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-additional-scripts\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.596459 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=63.756067439 podStartE2EDuration="1m6.596445037s" podCreationTimestamp="2026-03-14 09:17:05 +0000 UTC" firstStartedPulling="2026-03-14 09:17:24.064528223 +0000 UTC m=+1197.036810276" lastFinishedPulling="2026-03-14 09:17:26.904905821 +0000 UTC m=+1199.877187874" observedRunningTime="2026-03-14 09:18:11.59130749 +0000 UTC m=+1244.563589563" watchObservedRunningTime="2026-03-14 09:18:11.596445037 +0000 UTC m=+1244.568727090"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.676194 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.676249 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run-ovn\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.676337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-scripts\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.676381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-additional-scripts\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.676450 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-log-ovn\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.676490 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49grb\" (UniqueName: \"kubernetes.io/projected/aa0947ea-b2ac-48d6-91d0-ce4d21948347-kube-api-access-49grb\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.676694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.677307 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run-ovn\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.677379 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-log-ovn\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.677918 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-additional-scripts\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.679254 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-scripts\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.714069 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49grb\" (UniqueName: \"kubernetes.io/projected/aa0947ea-b2ac-48d6-91d0-ce4d21948347-kube-api-access-49grb\") pod \"ovn-controller-vznj2-config-fj4lp\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.719606 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d454e5c2-25a0-4404-9447-10485e0adea0" path="/var/lib/kubelet/pods/d454e5c2-25a0-4404-9447-10485e0adea0/volumes"
Mar 14 09:18:11 crc kubenswrapper[4869]: I0314 09:18:11.828331 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vznj2-config-fj4lp"
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.380705 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vznj2-config-fj4lp"]
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.521587 4869 generic.go:334] "Generic (PLEG): container finished" podID="cb9d5689-4433-473b-9f9b-edd43281b328" containerID="76911c237eb8e3cfa6942fb95a9b77f03de514d6ebcdf85b0a7d157bc1a4bfb1" exitCode=0
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.521662 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d6d5-account-create-update-rd5gj" event={"ID":"cb9d5689-4433-473b-9f9b-edd43281b328","Type":"ContainerDied","Data":"76911c237eb8e3cfa6942fb95a9b77f03de514d6ebcdf85b0a7d157bc1a4bfb1"}
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.523672 4869 generic.go:334] "Generic (PLEG): container finished" podID="33e29751-96de-4f9a-9756-6bde3535c6ee" containerID="d9a23c61653d81cff146499b3d2649592eac0bd90a5ddb742cbadb816a70c04e" exitCode=0
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.523733 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcj56" event={"ID":"33e29751-96de-4f9a-9756-6bde3535c6ee","Type":"ContainerDied","Data":"d9a23c61653d81cff146499b3d2649592eac0bd90a5ddb742cbadb816a70c04e"}
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.525095 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vznj2-config-fj4lp" event={"ID":"aa0947ea-b2ac-48d6-91d0-ce4d21948347","Type":"ContainerStarted","Data":"4aecada31512430b2cb4baee188f4285a432c651cc3a475c01d5a93ee5e816e2"}
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.526692 4869 generic.go:334] "Generic (PLEG): container finished" podID="3a191a24-a73d-4f29-b9b4-94ad8d78b4f4" containerID="da2a11514e60d1c5cf9ee9f12bc072d4b5591d007f769a86f6cd1dbbb3ef87a8" exitCode=0
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.526731 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d9nfj" event={"ID":"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4","Type":"ContainerDied","Data":"da2a11514e60d1c5cf9ee9f12bc072d4b5591d007f769a86f6cd1dbbb3ef87a8"}
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.528142 4869 generic.go:334] "Generic (PLEG): container finished" podID="d9c24332-9232-4665-a910-640c344ea424" containerID="0f26a3e5e60ee458256bbcf1ad03f7a873517ed74e9d67de3757d0dc94020638" exitCode=0
Mar 14 09:18:12 crc kubenswrapper[4869]: I0314 09:18:12.528227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69f0-account-create-update-pgvcz" event={"ID":"d9c24332-9232-4665-a910-640c344ea424","Type":"ContainerDied","Data":"0f26a3e5e60ee458256bbcf1ad03f7a873517ed74e9d67de3757d0dc94020638"}
Mar 14 09:18:13 crc kubenswrapper[4869]: I0314 09:18:13.783723 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557998-725qn" Mar 14 09:18:13 crc kubenswrapper[4869]: I0314 09:18:13.874366 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4l4rv"] Mar 14 09:18:13 crc kubenswrapper[4869]: E0314 09:18:13.874831 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22c0c3e-3573-40f6-8bd9-000533db9955" containerName="oc" Mar 14 09:18:13 crc kubenswrapper[4869]: I0314 09:18:13.874875 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22c0c3e-3573-40f6-8bd9-000533db9955" containerName="oc" Mar 14 09:18:13 crc kubenswrapper[4869]: I0314 09:18:13.875087 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22c0c3e-3573-40f6-8bd9-000533db9955" containerName="oc" Mar 14 09:18:13 crc kubenswrapper[4869]: I0314 09:18:13.875796 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:13 crc kubenswrapper[4869]: I0314 09:18:13.878446 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 14 09:18:13 crc kubenswrapper[4869]: I0314 09:18:13.896319 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4l4rv"] Mar 14 09:18:13 crc kubenswrapper[4869]: I0314 09:18:13.942633 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzdgz\" (UniqueName: \"kubernetes.io/projected/d22c0c3e-3573-40f6-8bd9-000533db9955-kube-api-access-rzdgz\") pod \"d22c0c3e-3573-40f6-8bd9-000533db9955\" (UID: \"d22c0c3e-3573-40f6-8bd9-000533db9955\") " Mar 14 09:18:13 crc kubenswrapper[4869]: I0314 09:18:13.959174 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22c0c3e-3573-40f6-8bd9-000533db9955-kube-api-access-rzdgz" (OuterVolumeSpecName: "kube-api-access-rzdgz") pod 
"d22c0c3e-3573-40f6-8bd9-000533db9955" (UID: "d22c0c3e-3573-40f6-8bd9-000533db9955"). InnerVolumeSpecName "kube-api-access-rzdgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.039838 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.043909 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade47a1c-2503-406e-b29b-d2f0f6976541-operator-scripts\") pod \"root-account-create-update-4l4rv\" (UID: \"ade47a1c-2503-406e-b29b-d2f0f6976541\") " pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.043970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8hmw\" (UniqueName: \"kubernetes.io/projected/ade47a1c-2503-406e-b29b-d2f0f6976541-kube-api-access-f8hmw\") pod \"root-account-create-update-4l4rv\" (UID: \"ade47a1c-2503-406e-b29b-d2f0f6976541\") " pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.044214 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzdgz\" (UniqueName: \"kubernetes.io/projected/d22c0c3e-3573-40f6-8bd9-000533db9955-kube-api-access-rzdgz\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.145762 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9d5689-4433-473b-9f9b-edd43281b328-operator-scripts\") pod \"cb9d5689-4433-473b-9f9b-edd43281b328\" (UID: \"cb9d5689-4433-473b-9f9b-edd43281b328\") " Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.145950 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7sn\" (UniqueName: \"kubernetes.io/projected/cb9d5689-4433-473b-9f9b-edd43281b328-kube-api-access-7n7sn\") pod \"cb9d5689-4433-473b-9f9b-edd43281b328\" (UID: \"cb9d5689-4433-473b-9f9b-edd43281b328\") " Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.146157 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8hmw\" (UniqueName: \"kubernetes.io/projected/ade47a1c-2503-406e-b29b-d2f0f6976541-kube-api-access-f8hmw\") pod \"root-account-create-update-4l4rv\" (UID: \"ade47a1c-2503-406e-b29b-d2f0f6976541\") " pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.146546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade47a1c-2503-406e-b29b-d2f0f6976541-operator-scripts\") pod \"root-account-create-update-4l4rv\" (UID: \"ade47a1c-2503-406e-b29b-d2f0f6976541\") " pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.147137 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9d5689-4433-473b-9f9b-edd43281b328-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb9d5689-4433-473b-9f9b-edd43281b328" (UID: "cb9d5689-4433-473b-9f9b-edd43281b328"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.148857 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade47a1c-2503-406e-b29b-d2f0f6976541-operator-scripts\") pod \"root-account-create-update-4l4rv\" (UID: \"ade47a1c-2503-406e-b29b-d2f0f6976541\") " pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.151748 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9d5689-4433-473b-9f9b-edd43281b328-kube-api-access-7n7sn" (OuterVolumeSpecName: "kube-api-access-7n7sn") pod "cb9d5689-4433-473b-9f9b-edd43281b328" (UID: "cb9d5689-4433-473b-9f9b-edd43281b328"). InnerVolumeSpecName "kube-api-access-7n7sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.167925 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8hmw\" (UniqueName: \"kubernetes.io/projected/ade47a1c-2503-406e-b29b-d2f0f6976541-kube-api-access-f8hmw\") pod \"root-account-create-update-4l4rv\" (UID: \"ade47a1c-2503-406e-b29b-d2f0f6976541\") " pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.168908 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.206822 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.219663 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.248385 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n7sn\" (UniqueName: \"kubernetes.io/projected/cb9d5689-4433-473b-9f9b-edd43281b328-kube-api-access-7n7sn\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.248414 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9d5689-4433-473b-9f9b-edd43281b328-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.350762 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c24332-9232-4665-a910-640c344ea424-operator-scripts\") pod \"d9c24332-9232-4665-a910-640c344ea424\" (UID: \"d9c24332-9232-4665-a910-640c344ea424\") " Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.351150 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9c24332-9232-4665-a910-640c344ea424-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9c24332-9232-4665-a910-640c344ea424" (UID: "d9c24332-9232-4665-a910-640c344ea424"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.351392 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgf5b\" (UniqueName: \"kubernetes.io/projected/d9c24332-9232-4665-a910-640c344ea424-kube-api-access-xgf5b\") pod \"d9c24332-9232-4665-a910-640c344ea424\" (UID: \"d9c24332-9232-4665-a910-640c344ea424\") " Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.351537 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p69pt\" (UniqueName: \"kubernetes.io/projected/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-kube-api-access-p69pt\") pod \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\" (UID: \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\") " Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.351746 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-operator-scripts\") pod \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\" (UID: \"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4\") " Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.352109 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a191a24-a73d-4f29-b9b4-94ad8d78b4f4" (UID: "3a191a24-a73d-4f29-b9b4-94ad8d78b4f4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.352501 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.352912 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9c24332-9232-4665-a910-640c344ea424-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.356724 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c24332-9232-4665-a910-640c344ea424-kube-api-access-xgf5b" (OuterVolumeSpecName: "kube-api-access-xgf5b") pod "d9c24332-9232-4665-a910-640c344ea424" (UID: "d9c24332-9232-4665-a910-640c344ea424"). InnerVolumeSpecName "kube-api-access-xgf5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.356798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-kube-api-access-p69pt" (OuterVolumeSpecName: "kube-api-access-p69pt") pod "3a191a24-a73d-4f29-b9b4-94ad8d78b4f4" (UID: "3a191a24-a73d-4f29-b9b4-94ad8d78b4f4"). InnerVolumeSpecName "kube-api-access-p69pt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.455017 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgf5b\" (UniqueName: \"kubernetes.io/projected/d9c24332-9232-4665-a910-640c344ea424-kube-api-access-xgf5b\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.455060 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p69pt\" (UniqueName: \"kubernetes.io/projected/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4-kube-api-access-p69pt\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.606680 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69f0-account-create-update-pgvcz" event={"ID":"d9c24332-9232-4665-a910-640c344ea424","Type":"ContainerDied","Data":"4df48c3a91ed7d8e3873a4339b4d6043585884fb6181e8607a3c2b089211cbe5"} Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.606712 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69f0-account-create-update-pgvcz" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.606728 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df48c3a91ed7d8e3873a4339b4d6043585884fb6181e8607a3c2b089211cbe5" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.609794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d6d5-account-create-update-rd5gj" event={"ID":"cb9d5689-4433-473b-9f9b-edd43281b328","Type":"ContainerDied","Data":"800a824ee9ec2681c0d9ceefa137596dfb16c01bf5963fa79b6c4b5840a15253"} Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.609841 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800a824ee9ec2681c0d9ceefa137596dfb16c01bf5963fa79b6c4b5840a15253" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.610023 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d6d5-account-create-update-rd5gj" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.613096 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29557998-725qn" event={"ID":"d22c0c3e-3573-40f6-8bd9-000533db9955","Type":"ContainerDied","Data":"7435b4ff9af89fc8dc18ab470ca5df605ed86768f5796416e2669d41ab868ba2"} Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.613132 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29557998-725qn" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.613133 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7435b4ff9af89fc8dc18ab470ca5df605ed86768f5796416e2669d41ab868ba2" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.615150 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vznj2-config-fj4lp" event={"ID":"aa0947ea-b2ac-48d6-91d0-ce4d21948347","Type":"ContainerStarted","Data":"cef5c259787334b288452ddfe57cea9ecaea2a29a7759fd3eea25dda860fb5fd"} Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.616676 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d9nfj" event={"ID":"3a191a24-a73d-4f29-b9b4-94ad8d78b4f4","Type":"ContainerDied","Data":"55017d36d8b7d5e8bbdd138d99f97c7617ca7cb1b942e0866e4be845f72c0e31"} Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.616717 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55017d36d8b7d5e8bbdd138d99f97c7617ca7cb1b942e0866e4be845f72c0e31" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.616718 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-d9nfj" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.640696 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vznj2-config-fj4lp" podStartSLOduration=3.640674714 podStartE2EDuration="3.640674714s" podCreationTimestamp="2026-03-14 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:14.636165303 +0000 UTC m=+1247.608447376" watchObservedRunningTime="2026-03-14 09:18:14.640674714 +0000 UTC m=+1247.612956767" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.849803 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.878440 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557992-qsslm"] Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.890159 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557992-qsslm"] Mar 14 09:18:14 crc kubenswrapper[4869]: E0314 09:18:14.917836 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c24332_9232_4665_a910_640c344ea424.slice\": RecentStats: unable to find data in memory cache]" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.918969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wxlt\" (UniqueName: \"kubernetes.io/projected/33e29751-96de-4f9a-9756-6bde3535c6ee-kube-api-access-2wxlt\") pod \"33e29751-96de-4f9a-9756-6bde3535c6ee\" (UID: \"33e29751-96de-4f9a-9756-6bde3535c6ee\") " Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.919044 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e29751-96de-4f9a-9756-6bde3535c6ee-operator-scripts\") pod \"33e29751-96de-4f9a-9756-6bde3535c6ee\" (UID: \"33e29751-96de-4f9a-9756-6bde3535c6ee\") " Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.920475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33e29751-96de-4f9a-9756-6bde3535c6ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33e29751-96de-4f9a-9756-6bde3535c6ee" (UID: "33e29751-96de-4f9a-9756-6bde3535c6ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:14 crc kubenswrapper[4869]: I0314 09:18:14.939720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e29751-96de-4f9a-9756-6bde3535c6ee-kube-api-access-2wxlt" (OuterVolumeSpecName: "kube-api-access-2wxlt") pod "33e29751-96de-4f9a-9756-6bde3535c6ee" (UID: "33e29751-96de-4f9a-9756-6bde3535c6ee"). InnerVolumeSpecName "kube-api-access-2wxlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.021594 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wxlt\" (UniqueName: \"kubernetes.io/projected/33e29751-96de-4f9a-9756-6bde3535c6ee-kube-api-access-2wxlt\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.021627 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e29751-96de-4f9a-9756-6bde3535c6ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.183593 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4l4rv"] Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.628867 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rcj56" event={"ID":"33e29751-96de-4f9a-9756-6bde3535c6ee","Type":"ContainerDied","Data":"00debabcb0dcf282f15d6b0781c8ac3ee597c469e1761b3da06e7e9090008607"} Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.628906 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00debabcb0dcf282f15d6b0781c8ac3ee597c469e1761b3da06e7e9090008607" Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.628978 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rcj56" Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.637477 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerStarted","Data":"73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc"} Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.640028 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4l4rv" event={"ID":"ade47a1c-2503-406e-b29b-d2f0f6976541","Type":"ContainerStarted","Data":"093dd0a2dfb639d50904cc4945bdd277b3dd828c230d0e3e1a3ffe1900031422"} Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.640070 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4l4rv" event={"ID":"ade47a1c-2503-406e-b29b-d2f0f6976541","Type":"ContainerStarted","Data":"f68be37f7fd86d27f53ccb78e1b7875b627d2b319692936fbaefe2eb90f1411c"} Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.642008 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa0947ea-b2ac-48d6-91d0-ce4d21948347" containerID="cef5c259787334b288452ddfe57cea9ecaea2a29a7759fd3eea25dda860fb5fd" exitCode=0 Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.642049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vznj2-config-fj4lp" event={"ID":"aa0947ea-b2ac-48d6-91d0-ce4d21948347","Type":"ContainerDied","Data":"cef5c259787334b288452ddfe57cea9ecaea2a29a7759fd3eea25dda860fb5fd"} Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.697461 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.139112985 podStartE2EDuration="1m3.697443591s" podCreationTimestamp="2026-03-14 09:17:12 +0000 UTC" firstStartedPulling="2026-03-14 09:17:25.154652833 +0000 UTC m=+1198.126934886" 
lastFinishedPulling="2026-03-14 09:18:14.712983439 +0000 UTC m=+1247.685265492" observedRunningTime="2026-03-14 09:18:15.680931453 +0000 UTC m=+1248.653213526" watchObservedRunningTime="2026-03-14 09:18:15.697443591 +0000 UTC m=+1248.669725644" Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.700046 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-4l4rv" podStartSLOduration=2.7000371149999998 podStartE2EDuration="2.700037115s" podCreationTimestamp="2026-03-14 09:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:15.692870688 +0000 UTC m=+1248.665152741" watchObservedRunningTime="2026-03-14 09:18:15.700037115 +0000 UTC m=+1248.672319168" Mar 14 09:18:15 crc kubenswrapper[4869]: I0314 09:18:15.721499 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810d831b-f3a6-498d-b1d1-33dc89ef275c" path="/var/lib/kubelet/pods/810d831b-f3a6-498d-b1d1-33dc89ef275c/volumes" Mar 14 09:18:16 crc kubenswrapper[4869]: I0314 09:18:16.218966 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vznj2" Mar 14 09:18:16 crc kubenswrapper[4869]: I0314 09:18:16.249557 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:18:16 crc kubenswrapper[4869]: I0314 09:18:16.257894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8089ea8f-74c0-4fa4-93bd-dc107394a9e5-etc-swift\") pod \"swift-storage-0\" (UID: \"8089ea8f-74c0-4fa4-93bd-dc107394a9e5\") " pod="openstack/swift-storage-0" Mar 14 09:18:16 crc kubenswrapper[4869]: I0314 
09:18:16.442122 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 14 09:18:16 crc kubenswrapper[4869]: I0314 09:18:16.666539 4869 generic.go:334] "Generic (PLEG): container finished" podID="ade47a1c-2503-406e-b29b-d2f0f6976541" containerID="093dd0a2dfb639d50904cc4945bdd277b3dd828c230d0e3e1a3ffe1900031422" exitCode=0 Mar 14 09:18:16 crc kubenswrapper[4869]: I0314 09:18:16.666679 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4l4rv" event={"ID":"ade47a1c-2503-406e-b29b-d2f0f6976541","Type":"ContainerDied","Data":"093dd0a2dfb639d50904cc4945bdd277b3dd828c230d0e3e1a3ffe1900031422"} Mar 14 09:18:16 crc kubenswrapper[4869]: I0314 09:18:16.669428 4869 generic.go:334] "Generic (PLEG): container finished" podID="1321f800-bd9a-41b6-9bfc-b4f48a644230" containerID="cf7c7b2589e8fad9b7cc7b5b8be75c39ef5bb0d78af7e4ce4ed80b5655a80a56" exitCode=0 Mar 14 09:18:16 crc kubenswrapper[4869]: I0314 09:18:16.669497 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ql8h6" event={"ID":"1321f800-bd9a-41b6-9bfc-b4f48a644230","Type":"ContainerDied","Data":"cf7c7b2589e8fad9b7cc7b5b8be75c39ef5bb0d78af7e4ce4ed80b5655a80a56"} Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.053147 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vznj2-config-fj4lp" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.173190 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run-ovn\") pod \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.173326 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-log-ovn\") pod \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.173333 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "aa0947ea-b2ac-48d6-91d0-ce4d21948347" (UID: "aa0947ea-b2ac-48d6-91d0-ce4d21948347"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.173387 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run\") pod \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.173421 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run" (OuterVolumeSpecName: "var-run") pod "aa0947ea-b2ac-48d6-91d0-ce4d21948347" (UID: "aa0947ea-b2ac-48d6-91d0-ce4d21948347"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.173425 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "aa0947ea-b2ac-48d6-91d0-ce4d21948347" (UID: "aa0947ea-b2ac-48d6-91d0-ce4d21948347"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.173440 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49grb\" (UniqueName: \"kubernetes.io/projected/aa0947ea-b2ac-48d6-91d0-ce4d21948347-kube-api-access-49grb\") pod \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.173486 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-additional-scripts\") pod \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.173680 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-scripts\") pod \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\" (UID: \"aa0947ea-b2ac-48d6-91d0-ce4d21948347\") " Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.174403 4869 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.174432 4869 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.174446 4869 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aa0947ea-b2ac-48d6-91d0-ce4d21948347-var-run\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.174456 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "aa0947ea-b2ac-48d6-91d0-ce4d21948347" (UID: "aa0947ea-b2ac-48d6-91d0-ce4d21948347"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.174705 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-scripts" (OuterVolumeSpecName: "scripts") pod "aa0947ea-b2ac-48d6-91d0-ce4d21948347" (UID: "aa0947ea-b2ac-48d6-91d0-ce4d21948347"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.196939 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0947ea-b2ac-48d6-91d0-ce4d21948347-kube-api-access-49grb" (OuterVolumeSpecName: "kube-api-access-49grb") pod "aa0947ea-b2ac-48d6-91d0-ce4d21948347" (UID: "aa0947ea-b2ac-48d6-91d0-ce4d21948347"). InnerVolumeSpecName "kube-api-access-49grb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.203277 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.276999 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49grb\" (UniqueName: \"kubernetes.io/projected/aa0947ea-b2ac-48d6-91d0-ce4d21948347-kube-api-access-49grb\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.277035 4869 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.277045 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0947ea-b2ac-48d6-91d0-ce4d21948347-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.692400 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"12993e2a939d64a13d7407c327549954030b331057d3c3b43752d3101a839f02"} Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.701171 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vznj2-config-fj4lp" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.703911 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vznj2-config-fj4lp" event={"ID":"aa0947ea-b2ac-48d6-91d0-ce4d21948347","Type":"ContainerDied","Data":"4aecada31512430b2cb4baee188f4285a432c651cc3a475c01d5a93ee5e816e2"} Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.703965 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aecada31512430b2cb4baee188f4285a432c651cc3a475c01d5a93ee5e816e2" Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.759872 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vznj2-config-fj4lp"] Mar 14 09:18:17 crc kubenswrapper[4869]: I0314 09:18:17.774354 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vznj2-config-fj4lp"] Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.352961 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.363044 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.513894 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1321f800-bd9a-41b6-9bfc-b4f48a644230-etc-swift\") pod \"1321f800-bd9a-41b6-9bfc-b4f48a644230\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.514091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-scripts\") pod \"1321f800-bd9a-41b6-9bfc-b4f48a644230\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.514126 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8hmw\" (UniqueName: \"kubernetes.io/projected/ade47a1c-2503-406e-b29b-d2f0f6976541-kube-api-access-f8hmw\") pod \"ade47a1c-2503-406e-b29b-d2f0f6976541\" (UID: \"ade47a1c-2503-406e-b29b-d2f0f6976541\") " Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.514165 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-combined-ca-bundle\") pod \"1321f800-bd9a-41b6-9bfc-b4f48a644230\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.514198 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade47a1c-2503-406e-b29b-d2f0f6976541-operator-scripts\") pod \"ade47a1c-2503-406e-b29b-d2f0f6976541\" (UID: \"ade47a1c-2503-406e-b29b-d2f0f6976541\") " Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.514240 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-dispersionconf\") pod \"1321f800-bd9a-41b6-9bfc-b4f48a644230\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.514292 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-ring-data-devices\") pod \"1321f800-bd9a-41b6-9bfc-b4f48a644230\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.514377 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-swiftconf\") pod \"1321f800-bd9a-41b6-9bfc-b4f48a644230\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.514550 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nldhr\" (UniqueName: \"kubernetes.io/projected/1321f800-bd9a-41b6-9bfc-b4f48a644230-kube-api-access-nldhr\") pod \"1321f800-bd9a-41b6-9bfc-b4f48a644230\" (UID: \"1321f800-bd9a-41b6-9bfc-b4f48a644230\") " Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.517198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade47a1c-2503-406e-b29b-d2f0f6976541-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ade47a1c-2503-406e-b29b-d2f0f6976541" (UID: "ade47a1c-2503-406e-b29b-d2f0f6976541"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.517329 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1321f800-bd9a-41b6-9bfc-b4f48a644230-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1321f800-bd9a-41b6-9bfc-b4f48a644230" (UID: "1321f800-bd9a-41b6-9bfc-b4f48a644230"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.518219 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1321f800-bd9a-41b6-9bfc-b4f48a644230" (UID: "1321f800-bd9a-41b6-9bfc-b4f48a644230"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.531110 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1321f800-bd9a-41b6-9bfc-b4f48a644230-kube-api-access-nldhr" (OuterVolumeSpecName: "kube-api-access-nldhr") pod "1321f800-bd9a-41b6-9bfc-b4f48a644230" (UID: "1321f800-bd9a-41b6-9bfc-b4f48a644230"). InnerVolumeSpecName "kube-api-access-nldhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.531539 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade47a1c-2503-406e-b29b-d2f0f6976541-kube-api-access-f8hmw" (OuterVolumeSpecName: "kube-api-access-f8hmw") pod "ade47a1c-2503-406e-b29b-d2f0f6976541" (UID: "ade47a1c-2503-406e-b29b-d2f0f6976541"). InnerVolumeSpecName "kube-api-access-f8hmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.539967 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1321f800-bd9a-41b6-9bfc-b4f48a644230" (UID: "1321f800-bd9a-41b6-9bfc-b4f48a644230"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.569388 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-scripts" (OuterVolumeSpecName: "scripts") pod "1321f800-bd9a-41b6-9bfc-b4f48a644230" (UID: "1321f800-bd9a-41b6-9bfc-b4f48a644230"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.616723 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.616760 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8hmw\" (UniqueName: \"kubernetes.io/projected/ade47a1c-2503-406e-b29b-d2f0f6976541-kube-api-access-f8hmw\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.616774 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade47a1c-2503-406e-b29b-d2f0f6976541-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.616785 4869 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 14 
09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.616796 4869 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1321f800-bd9a-41b6-9bfc-b4f48a644230-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.616807 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nldhr\" (UniqueName: \"kubernetes.io/projected/1321f800-bd9a-41b6-9bfc-b4f48a644230-kube-api-access-nldhr\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.616819 4869 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1321f800-bd9a-41b6-9bfc-b4f48a644230-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.670102 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1321f800-bd9a-41b6-9bfc-b4f48a644230" (UID: "1321f800-bd9a-41b6-9bfc-b4f48a644230"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.680617 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1321f800-bd9a-41b6-9bfc-b4f48a644230" (UID: "1321f800-bd9a-41b6-9bfc-b4f48a644230"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.716493 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4l4rv" event={"ID":"ade47a1c-2503-406e-b29b-d2f0f6976541","Type":"ContainerDied","Data":"f68be37f7fd86d27f53ccb78e1b7875b627d2b319692936fbaefe2eb90f1411c"} Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.716550 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f68be37f7fd86d27f53ccb78e1b7875b627d2b319692936fbaefe2eb90f1411c" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.716644 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4l4rv" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.718346 4869 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.718380 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1321f800-bd9a-41b6-9bfc-b4f48a644230-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.718965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ql8h6" event={"ID":"1321f800-bd9a-41b6-9bfc-b4f48a644230","Type":"ContainerDied","Data":"459b7619d364d67e99fafba62b2d91230d00a293f00208c863e617374d877919"} Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.718991 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="459b7619d364d67e99fafba62b2d91230d00a293f00208c863e617374d877919" Mar 14 09:18:18 crc kubenswrapper[4869]: I0314 09:18:18.719025 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ql8h6" Mar 14 09:18:19 crc kubenswrapper[4869]: I0314 09:18:19.275340 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:19 crc kubenswrapper[4869]: I0314 09:18:19.718812 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0947ea-b2ac-48d6-91d0-ce4d21948347" path="/var/lib/kubelet/pods/aa0947ea-b2ac-48d6-91d0-ce4d21948347/volumes" Mar 14 09:18:19 crc kubenswrapper[4869]: I0314 09:18:19.739898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"135cd01d8f3635de5b3ca28a4180ba5a7d107b3057086176f003dcd8370333db"} Mar 14 09:18:19 crc kubenswrapper[4869]: I0314 09:18:19.739950 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"b6a10b1888619e712b90dbd1c23909ee3fffb4c8008c21b14b839c89fb301f78"} Mar 14 09:18:19 crc kubenswrapper[4869]: I0314 09:18:19.739963 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"b14fef199899f1f4268942715dcb6bca7e9444d10d06af19b7a67fb8d7b594f4"} Mar 14 09:18:27 crc kubenswrapper[4869]: I0314 09:18:27.035572 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9735b30c-8379-4478-9460-51882d519d32" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.110:5671: connect: connection refused" Mar 14 09:18:27 crc kubenswrapper[4869]: I0314 09:18:27.075067 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="38c3b4a0-0639-4d3b-ae4f-3e272522326f" containerName="rabbitmq" probeResult="failure" output="dial tcp 
10.217.0.109:5671: connect: connection refused" Mar 14 09:18:27 crc kubenswrapper[4869]: I0314 09:18:27.332373 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="da13efd4-046a-4059-9b04-b731f2d164b5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused" Mar 14 09:18:27 crc kubenswrapper[4869]: I0314 09:18:27.814857 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"a840939518ebf8aaec4a2a29af8e6caf7501be03efaa89909b1bbc1cc6886814"} Mar 14 09:18:28 crc kubenswrapper[4869]: I0314 09:18:28.823950 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j8jwz" event={"ID":"79e8c4d6-376f-4130-8057-06519abb646a","Type":"ContainerStarted","Data":"31d7505207d3a4d1a7f3d0d645209173109c78f6ee3b80a9b0bff68706397b16"} Mar 14 09:18:28 crc kubenswrapper[4869]: I0314 09:18:28.828529 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"bb03512327d8b00a3715fb101697307bdc366ae69fb3503be3bab4bbae2b8b39"} Mar 14 09:18:28 crc kubenswrapper[4869]: I0314 09:18:28.828566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"a4a64613de761bbd50c205488b4c6438c69ec73659811b109e91bf770d17f384"} Mar 14 09:18:28 crc kubenswrapper[4869]: I0314 09:18:28.848295 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-j8jwz" podStartSLOduration=5.868575853 podStartE2EDuration="22.848271342s" podCreationTimestamp="2026-03-14 09:18:06 +0000 UTC" firstStartedPulling="2026-03-14 09:18:10.547381942 +0000 UTC m=+1243.519663995" lastFinishedPulling="2026-03-14 
09:18:27.527077431 +0000 UTC m=+1260.499359484" observedRunningTime="2026-03-14 09:18:28.838977413 +0000 UTC m=+1261.811259476" watchObservedRunningTime="2026-03-14 09:18:28.848271342 +0000 UTC m=+1261.820553405" Mar 14 09:18:29 crc kubenswrapper[4869]: I0314 09:18:29.275337 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:29 crc kubenswrapper[4869]: I0314 09:18:29.278315 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:29 crc kubenswrapper[4869]: I0314 09:18:29.845701 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"8106edb52488b2a7992eb8d101684b3eb567e74d55d4461f85044cf301b17b26"} Mar 14 09:18:29 crc kubenswrapper[4869]: I0314 09:18:29.846030 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"713ef217e52da49b6e7f6b5a0b04611c84953b44ccac99f889039e4834a0b7ed"} Mar 14 09:18:29 crc kubenswrapper[4869]: I0314 09:18:29.846047 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"fdc8e085b95c46683b13445daa48c909b582df17b41e88a6368d38d36565802c"} Mar 14 09:18:29 crc kubenswrapper[4869]: I0314 09:18:29.847365 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:30 crc kubenswrapper[4869]: I0314 09:18:30.861119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"145ffc3016538d81e027d5939902511ef006db62957984ad6b60cbfa2ff19f40"} Mar 14 09:18:30 crc 
kubenswrapper[4869]: I0314 09:18:30.861494 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"0f254163be79ba081e159df14621656f34be46d9b85eb57688beb24fb238adbe"} Mar 14 09:18:30 crc kubenswrapper[4869]: I0314 09:18:30.861525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"2d6dbf8040d61b68c35fb68ec563d7c957d9464804dfa237d5fd5c996b507583"} Mar 14 09:18:30 crc kubenswrapper[4869]: I0314 09:18:30.861538 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"7c6c18a3101a55ea037a7fa3e3b3fa49cb87d27bbcf5994f0786dbffa2ea8fd3"} Mar 14 09:18:31 crc kubenswrapper[4869]: I0314 09:18:31.873674 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"edbb9f0af67eaf7536e87e9e8fd1dbfbf30ccd93d330af559349ae1e6601f492"} Mar 14 09:18:31 crc kubenswrapper[4869]: I0314 09:18:31.873976 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8089ea8f-74c0-4fa4-93bd-dc107394a9e5","Type":"ContainerStarted","Data":"44fb47bebdf462cca674a5efd95fcadf07a94dec85d92e7e8ffbbb0d2be90a72"} Mar 14 09:18:31 crc kubenswrapper[4869]: I0314 09:18:31.911970 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.638859857 podStartE2EDuration="48.91194485s" podCreationTimestamp="2026-03-14 09:17:43 +0000 UTC" firstStartedPulling="2026-03-14 09:18:17.226571232 +0000 UTC m=+1250.198853285" lastFinishedPulling="2026-03-14 09:18:29.499656225 +0000 UTC m=+1262.471938278" observedRunningTime="2026-03-14 09:18:31.904490415 +0000 UTC 
m=+1264.876772548" watchObservedRunningTime="2026-03-14 09:18:31.91194485 +0000 UTC m=+1264.884226933" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.217753 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d575b8c75-7rrrn"] Mar 14 09:18:32 crc kubenswrapper[4869]: E0314 09:18:32.218139 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e29751-96de-4f9a-9756-6bde3535c6ee" containerName="mariadb-database-create" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218157 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e29751-96de-4f9a-9756-6bde3535c6ee" containerName="mariadb-database-create" Mar 14 09:18:32 crc kubenswrapper[4869]: E0314 09:18:32.218170 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0947ea-b2ac-48d6-91d0-ce4d21948347" containerName="ovn-config" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218177 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0947ea-b2ac-48d6-91d0-ce4d21948347" containerName="ovn-config" Mar 14 09:18:32 crc kubenswrapper[4869]: E0314 09:18:32.218199 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9d5689-4433-473b-9f9b-edd43281b328" containerName="mariadb-account-create-update" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218207 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9d5689-4433-473b-9f9b-edd43281b328" containerName="mariadb-account-create-update" Mar 14 09:18:32 crc kubenswrapper[4869]: E0314 09:18:32.218223 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade47a1c-2503-406e-b29b-d2f0f6976541" containerName="mariadb-account-create-update" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218232 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade47a1c-2503-406e-b29b-d2f0f6976541" containerName="mariadb-account-create-update" Mar 14 09:18:32 crc kubenswrapper[4869]: E0314 09:18:32.218242 4869 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a191a24-a73d-4f29-b9b4-94ad8d78b4f4" containerName="mariadb-database-create" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218249 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a191a24-a73d-4f29-b9b4-94ad8d78b4f4" containerName="mariadb-database-create" Mar 14 09:18:32 crc kubenswrapper[4869]: E0314 09:18:32.218261 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1321f800-bd9a-41b6-9bfc-b4f48a644230" containerName="swift-ring-rebalance" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218267 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1321f800-bd9a-41b6-9bfc-b4f48a644230" containerName="swift-ring-rebalance" Mar 14 09:18:32 crc kubenswrapper[4869]: E0314 09:18:32.218277 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c24332-9232-4665-a910-640c344ea424" containerName="mariadb-account-create-update" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218284 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c24332-9232-4665-a910-640c344ea424" containerName="mariadb-account-create-update" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218458 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade47a1c-2503-406e-b29b-d2f0f6976541" containerName="mariadb-account-create-update" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218484 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0947ea-b2ac-48d6-91d0-ce4d21948347" containerName="ovn-config" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218493 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e29751-96de-4f9a-9756-6bde3535c6ee" containerName="mariadb-database-create" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218531 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c24332-9232-4665-a910-640c344ea424" 
containerName="mariadb-account-create-update" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218543 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a191a24-a73d-4f29-b9b4-94ad8d78b4f4" containerName="mariadb-database-create" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218574 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9d5689-4433-473b-9f9b-edd43281b328" containerName="mariadb-account-create-update" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.218585 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1321f800-bd9a-41b6-9bfc-b4f48a644230" containerName="swift-ring-rebalance" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.219775 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.223045 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.231713 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d575b8c75-7rrrn"] Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.373309 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-sb\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.373359 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-swift-storage-0\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " 
pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.373387 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-svc\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.373410 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-config\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.373449 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f9km\" (UniqueName: \"kubernetes.io/projected/e038eec8-d039-4436-a9af-3bd09cb8479f-kube-api-access-9f9km\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.373561 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-nb\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.451917 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.452185 4869 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/prometheus-metric-storage-0" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="prometheus" containerID="cri-o://be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b" gracePeriod=600 Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.452316 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="thanos-sidecar" containerID="cri-o://73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc" gracePeriod=600 Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.452330 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="config-reloader" containerID="cri-o://b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b" gracePeriod=600 Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.475348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-nb\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.475455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-sb\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.475498 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.475548 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-svc\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.475581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-config\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.475639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f9km\" (UniqueName: \"kubernetes.io/projected/e038eec8-d039-4436-a9af-3bd09cb8479f-kube-api-access-9f9km\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.476486 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-nb\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.476632 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-swift-storage-0\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: 
\"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.476645 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-sb\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.476673 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-svc\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.476785 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-config\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.508966 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f9km\" (UniqueName: \"kubernetes.io/projected/e038eec8-d039-4436-a9af-3bd09cb8479f-kube-api-access-9f9km\") pod \"dnsmasq-dns-6d575b8c75-7rrrn\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:32 crc kubenswrapper[4869]: I0314 09:18:32.542034 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:32.886005 4869 generic.go:334] "Generic (PLEG): container finished" podID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerID="73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc" exitCode=0 Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:32.886458 4869 generic.go:334] "Generic (PLEG): container finished" podID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerID="be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b" exitCode=0 Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:32.886132 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerDied","Data":"73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc"} Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:32.886547 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerDied","Data":"be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b"} Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.457360 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d575b8c75-7rrrn"] Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.675741 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.824372 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-tls-assets\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.824882 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-thanos-prometheus-http-client-file\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.825002 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-1\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.825040 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-2\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.825085 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config-out\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 
09:18:33.825251 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.825310 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8nzc\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-kube-api-access-b8nzc\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.825425 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-web-config\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.825492 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-0\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.825593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config\") pod \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\" (UID: \"2d3fa9ff-c502-4d77-8465-e60dea09a3a0\") " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.825783 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.825812 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.826129 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.826488 4869 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.826538 4869 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.826552 4869 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.829458 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config-out" (OuterVolumeSpecName: "config-out") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.829655 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.830051 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config" (OuterVolumeSpecName: "config") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.832251 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-kube-api-access-b8nzc" (OuterVolumeSpecName: "kube-api-access-b8nzc") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "kube-api-access-b8nzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.833431 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.845096 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "pvc-7638610f-3e54-4977-bd1e-0acda512cdb5". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.852453 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-web-config" (OuterVolumeSpecName: "web-config") pod "2d3fa9ff-c502-4d77-8465-e60dea09a3a0" (UID: "2d3fa9ff-c502-4d77-8465-e60dea09a3a0"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.902618 4869 generic.go:334] "Generic (PLEG): container finished" podID="e038eec8-d039-4436-a9af-3bd09cb8479f" containerID="7b95961f3686a9519b7f31bf1e2ec2841495aa8d649ecb04f563871eae09d699" exitCode=0 Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.902715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" event={"ID":"e038eec8-d039-4436-a9af-3bd09cb8479f","Type":"ContainerDied","Data":"7b95961f3686a9519b7f31bf1e2ec2841495aa8d649ecb04f563871eae09d699"} Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.902754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" event={"ID":"e038eec8-d039-4436-a9af-3bd09cb8479f","Type":"ContainerStarted","Data":"6effe8a3d289e6d6015458b90c01cfca4298f7f76b2c6ddaf73790cee2c94724"} Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.928026 4869 generic.go:334] "Generic (PLEG): container finished" podID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerID="b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b" exitCode=0 Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.928303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerDied","Data":"b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b"} Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.928338 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d3fa9ff-c502-4d77-8465-e60dea09a3a0","Type":"ContainerDied","Data":"f40ee6b8b945d5e36ceae0969a8024db6c50921dde37b8e229aec70b6190b198"} Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.928603 4869 scope.go:117] "RemoveContainer" containerID="73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.929105 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.937932 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.937990 4869 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.938006 4869 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.938021 4869 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-config-out\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.938051 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") on node 
\"crc\" " Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.938062 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8nzc\" (UniqueName: \"kubernetes.io/projected/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-kube-api-access-b8nzc\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.938075 4869 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2d3fa9ff-c502-4d77-8465-e60dea09a3a0-web-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.980956 4869 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.981073 4869 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7638610f-3e54-4977-bd1e-0acda512cdb5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5") on node "crc" Mar 14 09:18:33 crc kubenswrapper[4869]: I0314 09:18:33.994803 4869 scope.go:117] "RemoveContainer" containerID="b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.015769 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.032567 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.039561 4869 reconciler_common.go:293] "Volume detached for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.044989 4869 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:18:34 crc kubenswrapper[4869]: E0314 09:18:34.045329 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="thanos-sidecar" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.045342 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="thanos-sidecar" Mar 14 09:18:34 crc kubenswrapper[4869]: E0314 09:18:34.045368 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="prometheus" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.045375 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="prometheus" Mar 14 09:18:34 crc kubenswrapper[4869]: E0314 09:18:34.045386 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="config-reloader" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.045392 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="config-reloader" Mar 14 09:18:34 crc kubenswrapper[4869]: E0314 09:18:34.045411 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="init-config-reloader" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.045417 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="init-config-reloader" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.045586 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="config-reloader" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.045607 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="thanos-sidecar" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.045622 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" containerName="prometheus" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.047241 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.051369 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.051664 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-kdbf4" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.051875 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.052169 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.052403 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.052696 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.052783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.052974 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.053010 4869 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.060140 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.071556 4869 scope.go:117] "RemoveContainer" containerID="be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.102490 4869 scope.go:117] "RemoveContainer" containerID="b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.128975 4869 scope.go:117] "RemoveContainer" containerID="73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc" Mar 14 09:18:34 crc kubenswrapper[4869]: E0314 09:18:34.129344 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc\": container with ID starting with 73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc not found: ID does not exist" containerID="73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.129372 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc"} err="failed to get container status \"73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc\": rpc error: code = NotFound desc = could not find container \"73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc\": container with ID starting with 73d444e9177a1614229506820e512ba5531db85079fc2ec05e9bd0a7266521cc not found: ID does not exist" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.129393 4869 scope.go:117] "RemoveContainer" 
containerID="b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b" Mar 14 09:18:34 crc kubenswrapper[4869]: E0314 09:18:34.129588 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b\": container with ID starting with b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b not found: ID does not exist" containerID="b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.129609 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b"} err="failed to get container status \"b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b\": rpc error: code = NotFound desc = could not find container \"b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b\": container with ID starting with b9622f178d3d0520b907742a24f65ab460cc7ebfbb0bcb99fcc3d4d08e1e113b not found: ID does not exist" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.129627 4869 scope.go:117] "RemoveContainer" containerID="be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b" Mar 14 09:18:34 crc kubenswrapper[4869]: E0314 09:18:34.131652 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b\": container with ID starting with be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b not found: ID does not exist" containerID="be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.131675 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b"} err="failed to get container status \"be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b\": rpc error: code = NotFound desc = could not find container \"be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b\": container with ID starting with be677c287436d7ac69c8008c8666cb24a0f3782ab225549e515849b80378297b not found: ID does not exist" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.131687 4869 scope.go:117] "RemoveContainer" containerID="b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065" Mar 14 09:18:34 crc kubenswrapper[4869]: E0314 09:18:34.132053 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065\": container with ID starting with b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065 not found: ID does not exist" containerID="b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.132072 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065"} err="failed to get container status \"b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065\": rpc error: code = NotFound desc = could not find container \"b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065\": container with ID starting with b4186e5419f113c4a03ba09beed9469c89443d9ce9f8c080a0c77b28414ba065 not found: ID does not exist" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.242914 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/310e10f6-6126-4199-bc3f-e386680b8acb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.242973 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243001 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/310e10f6-6126-4199-bc3f-e386680b8acb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243032 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243179 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243225 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-config\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243250 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/310e10f6-6126-4199-bc3f-e386680b8acb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243360 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/310e10f6-6126-4199-bc3f-e386680b8acb-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243408 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/310e10f6-6126-4199-bc3f-e386680b8acb-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243464 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cprlc\" (UniqueName: \"kubernetes.io/projected/310e10f6-6126-4199-bc3f-e386680b8acb-kube-api-access-cprlc\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " 
pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243634 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.243718 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cprlc\" (UniqueName: \"kubernetes.io/projected/310e10f6-6126-4199-bc3f-e386680b8acb-kube-api-access-cprlc\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345623 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345652 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345702 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/310e10f6-6126-4199-bc3f-e386680b8acb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345788 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/310e10f6-6126-4199-bc3f-e386680b8acb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345856 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345891 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-config\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345924 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/310e10f6-6126-4199-bc3f-e386680b8acb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345960 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/310e10f6-6126-4199-bc3f-e386680b8acb-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.345995 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/310e10f6-6126-4199-bc3f-e386680b8acb-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.346462 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/310e10f6-6126-4199-bc3f-e386680b8acb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.346566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/310e10f6-6126-4199-bc3f-e386680b8acb-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.350413 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.350970 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.351470 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/310e10f6-6126-4199-bc3f-e386680b8acb-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.351605 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.351911 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.353258 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.353614 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/310e10f6-6126-4199-bc3f-e386680b8acb-config\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.353738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/310e10f6-6126-4199-bc3f-e386680b8acb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.354865 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/310e10f6-6126-4199-bc3f-e386680b8acb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.362452 4869 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.362503 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b23c7cf67fc615d04f0e059180fc33c3eccc3627e9974587af79149c424358e3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.368586 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cprlc\" (UniqueName: \"kubernetes.io/projected/310e10f6-6126-4199-bc3f-e386680b8acb-kube-api-access-cprlc\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.395703 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7638610f-3e54-4977-bd1e-0acda512cdb5\") pod \"prometheus-metric-storage-0\" (UID: \"310e10f6-6126-4199-bc3f-e386680b8acb\") " pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.676591 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.937196 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" event={"ID":"e038eec8-d039-4436-a9af-3bd09cb8479f","Type":"ContainerStarted","Data":"f2bf2532f97ec5b3b461c4d766cd6a25c9213301533ceb44a132269f5074bd24"} Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.937566 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:34 crc kubenswrapper[4869]: I0314 09:18:34.962456 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" podStartSLOduration=2.962437441 podStartE2EDuration="2.962437441s" podCreationTimestamp="2026-03-14 09:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:34.954841584 +0000 UTC m=+1267.927123637" watchObservedRunningTime="2026-03-14 09:18:34.962437441 +0000 UTC m=+1267.934719494" Mar 14 09:18:35 crc kubenswrapper[4869]: I0314 09:18:35.119727 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 14 09:18:35 crc kubenswrapper[4869]: I0314 09:18:35.715175 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d3fa9ff-c502-4d77-8465-e60dea09a3a0" path="/var/lib/kubelet/pods/2d3fa9ff-c502-4d77-8465-e60dea09a3a0/volumes" Mar 14 09:18:35 crc kubenswrapper[4869]: I0314 09:18:35.948062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"310e10f6-6126-4199-bc3f-e386680b8acb","Type":"ContainerStarted","Data":"f7319efa0b7e1eb49f3d7ae86280422810e76f4951890dcfaa34bd0df4a92db1"} Mar 14 09:18:37 crc kubenswrapper[4869]: I0314 09:18:37.036832 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-cell1-server-0" Mar 14 09:18:37 crc kubenswrapper[4869]: I0314 09:18:37.072729 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 14 09:18:37 crc kubenswrapper[4869]: I0314 09:18:37.331700 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Mar 14 09:18:37 crc kubenswrapper[4869]: I0314 09:18:37.964760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"310e10f6-6126-4199-bc3f-e386680b8acb","Type":"ContainerStarted","Data":"a54b29e4e7a577c933f004611584100471c5db996da6c3d96d25910e249bbceb"} Mar 14 09:18:38 crc kubenswrapper[4869]: I0314 09:18:38.951745 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4b2jl"] Mar 14 09:18:38 crc kubenswrapper[4869]: I0314 09:18:38.953424 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:38.975915 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4b2jl"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.049249 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pdmtz"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.064292 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lc8l\" (UniqueName: \"kubernetes.io/projected/4c6ece88-630b-4388-a67a-7356b8f3812e-kube-api-access-7lc8l\") pod \"barbican-db-create-4b2jl\" (UID: \"4c6ece88-630b-4388-a67a-7356b8f3812e\") " pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.064361 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4c6ece88-630b-4388-a67a-7356b8f3812e-operator-scripts\") pod \"barbican-db-create-4b2jl\" (UID: \"4c6ece88-630b-4388-a67a-7356b8f3812e\") " pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.065695 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.084643 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pdmtz"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.118343 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2a10-account-create-update-fkpqt"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.119743 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.123291 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.143381 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2a10-account-create-update-fkpqt"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.165901 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6ece88-630b-4388-a67a-7356b8f3812e-operator-scripts\") pod \"barbican-db-create-4b2jl\" (UID: \"4c6ece88-630b-4388-a67a-7356b8f3812e\") " pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.166034 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-operator-scripts\") pod \"cinder-db-create-pdmtz\" (UID: \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\") " 
pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.166168 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lc8l\" (UniqueName: \"kubernetes.io/projected/4c6ece88-630b-4388-a67a-7356b8f3812e-kube-api-access-7lc8l\") pod \"barbican-db-create-4b2jl\" (UID: \"4c6ece88-630b-4388-a67a-7356b8f3812e\") " pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.166203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7bb4\" (UniqueName: \"kubernetes.io/projected/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-kube-api-access-d7bb4\") pod \"cinder-db-create-pdmtz\" (UID: \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\") " pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.166919 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6ece88-630b-4388-a67a-7356b8f3812e-operator-scripts\") pod \"barbican-db-create-4b2jl\" (UID: \"4c6ece88-630b-4388-a67a-7356b8f3812e\") " pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.170474 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-w9p7x"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.171682 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.175036 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.175222 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.175345 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zk6zl" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.175467 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.183324 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-w9p7x"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.200950 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-050d-account-create-update-hfstd"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.202116 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.208542 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-050d-account-create-update-hfstd"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.209976 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.220416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lc8l\" (UniqueName: \"kubernetes.io/projected/4c6ece88-630b-4388-a67a-7356b8f3812e-kube-api-access-7lc8l\") pod \"barbican-db-create-4b2jl\" (UID: \"4c6ece88-630b-4388-a67a-7356b8f3812e\") " pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.267350 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw6k4\" (UniqueName: \"kubernetes.io/projected/9c4d107b-ac87-494b-8aeb-83d4488e934c-kube-api-access-kw6k4\") pod \"keystone-db-sync-w9p7x\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.267403 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7bb4\" (UniqueName: \"kubernetes.io/projected/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-kube-api-access-d7bb4\") pod \"cinder-db-create-pdmtz\" (UID: \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\") " pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.267434 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c3104a-93f8-416c-b18c-c5434a205595-operator-scripts\") pod \"cinder-2a10-account-create-update-fkpqt\" (UID: \"08c3104a-93f8-416c-b18c-c5434a205595\") " 
pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.267467 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxw85\" (UniqueName: \"kubernetes.io/projected/43206ff4-5f51-4c34-89af-92c875be15a7-kube-api-access-xxw85\") pod \"barbican-050d-account-create-update-hfstd\" (UID: \"43206ff4-5f51-4c34-89af-92c875be15a7\") " pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.267535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-combined-ca-bundle\") pod \"keystone-db-sync-w9p7x\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.267561 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43206ff4-5f51-4c34-89af-92c875be15a7-operator-scripts\") pod \"barbican-050d-account-create-update-hfstd\" (UID: \"43206ff4-5f51-4c34-89af-92c875be15a7\") " pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.267612 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-config-data\") pod \"keystone-db-sync-w9p7x\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.267647 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-operator-scripts\") pod 
\"cinder-db-create-pdmtz\" (UID: \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\") " pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.267686 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x5kv\" (UniqueName: \"kubernetes.io/projected/08c3104a-93f8-416c-b18c-c5434a205595-kube-api-access-7x5kv\") pod \"cinder-2a10-account-create-update-fkpqt\" (UID: \"08c3104a-93f8-416c-b18c-c5434a205595\") " pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.268365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-operator-scripts\") pod \"cinder-db-create-pdmtz\" (UID: \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\") " pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.302176 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7bb4\" (UniqueName: \"kubernetes.io/projected/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-kube-api-access-d7bb4\") pod \"cinder-db-create-pdmtz\" (UID: \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\") " pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.318205 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.369816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw6k4\" (UniqueName: \"kubernetes.io/projected/9c4d107b-ac87-494b-8aeb-83d4488e934c-kube-api-access-kw6k4\") pod \"keystone-db-sync-w9p7x\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.369874 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c3104a-93f8-416c-b18c-c5434a205595-operator-scripts\") pod \"cinder-2a10-account-create-update-fkpqt\" (UID: \"08c3104a-93f8-416c-b18c-c5434a205595\") " pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.369906 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxw85\" (UniqueName: \"kubernetes.io/projected/43206ff4-5f51-4c34-89af-92c875be15a7-kube-api-access-xxw85\") pod \"barbican-050d-account-create-update-hfstd\" (UID: \"43206ff4-5f51-4c34-89af-92c875be15a7\") " pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.369966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-combined-ca-bundle\") pod \"keystone-db-sync-w9p7x\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.369989 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43206ff4-5f51-4c34-89af-92c875be15a7-operator-scripts\") pod \"barbican-050d-account-create-update-hfstd\" (UID: 
\"43206ff4-5f51-4c34-89af-92c875be15a7\") " pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.370036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-config-data\") pod \"keystone-db-sync-w9p7x\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.370077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x5kv\" (UniqueName: \"kubernetes.io/projected/08c3104a-93f8-416c-b18c-c5434a205595-kube-api-access-7x5kv\") pod \"cinder-2a10-account-create-update-fkpqt\" (UID: \"08c3104a-93f8-416c-b18c-c5434a205595\") " pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.371093 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c3104a-93f8-416c-b18c-c5434a205595-operator-scripts\") pod \"cinder-2a10-account-create-update-fkpqt\" (UID: \"08c3104a-93f8-416c-b18c-c5434a205595\") " pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.371145 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43206ff4-5f51-4c34-89af-92c875be15a7-operator-scripts\") pod \"barbican-050d-account-create-update-hfstd\" (UID: \"43206ff4-5f51-4c34-89af-92c875be15a7\") " pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.376301 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-config-data\") pod \"keystone-db-sync-w9p7x\" (UID: 
\"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.377289 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-combined-ca-bundle\") pod \"keystone-db-sync-w9p7x\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.390911 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x5kv\" (UniqueName: \"kubernetes.io/projected/08c3104a-93f8-416c-b18c-c5434a205595-kube-api-access-7x5kv\") pod \"cinder-2a10-account-create-update-fkpqt\" (UID: \"08c3104a-93f8-416c-b18c-c5434a205595\") " pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.394118 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw6k4\" (UniqueName: \"kubernetes.io/projected/9c4d107b-ac87-494b-8aeb-83d4488e934c-kube-api-access-kw6k4\") pod \"keystone-db-sync-w9p7x\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.394118 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxw85\" (UniqueName: \"kubernetes.io/projected/43206ff4-5f51-4c34-89af-92c875be15a7-kube-api-access-xxw85\") pod \"barbican-050d-account-create-update-hfstd\" (UID: \"43206ff4-5f51-4c34-89af-92c875be15a7\") " pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.406029 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.445165 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.492407 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.572657 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.831634 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4b2jl"] Mar 14 09:18:39 crc kubenswrapper[4869]: W0314 09:18:39.834643 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c6ece88_630b_4388_a67a_7356b8f3812e.slice/crio-f5256b4563119901a9193aacc764e00aacf1f9ba63927d3c1855194938852f45 WatchSource:0}: Error finding container f5256b4563119901a9193aacc764e00aacf1f9ba63927d3c1855194938852f45: Status 404 returned error can't find the container with id f5256b4563119901a9193aacc764e00aacf1f9ba63927d3c1855194938852f45 Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.980912 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pdmtz"] Mar 14 09:18:39 crc kubenswrapper[4869]: I0314 09:18:39.992659 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4b2jl" event={"ID":"4c6ece88-630b-4388-a67a-7356b8f3812e","Type":"ContainerStarted","Data":"f5256b4563119901a9193aacc764e00aacf1f9ba63927d3c1855194938852f45"} Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.079844 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2a10-account-create-update-fkpqt"] Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.089078 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-w9p7x"] Mar 14 09:18:40 crc kubenswrapper[4869]: 
W0314 09:18:40.089302 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c4d107b_ac87_494b_8aeb_83d4488e934c.slice/crio-d9b30cd4342d65af1769db7821e25369d529e84242c88f987b375b2df258a848 WatchSource:0}: Error finding container d9b30cd4342d65af1769db7821e25369d529e84242c88f987b375b2df258a848: Status 404 returned error can't find the container with id d9b30cd4342d65af1769db7821e25369d529e84242c88f987b375b2df258a848 Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.233296 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-050d-account-create-update-hfstd"] Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.536143 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-2qcj2"] Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.538400 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.540900 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.542957 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-shpxz" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.544963 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-2qcj2"] Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.596327 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xvbcd"] Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.602160 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.614775 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xvbcd"] Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.695281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-combined-ca-bundle\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.695363 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-db-sync-config-data\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.695435 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-config-data\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.695471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl2r5\" (UniqueName: \"kubernetes.io/projected/17eb787e-6879-4fde-896d-6d22cab6748e-kube-api-access-nl2r5\") pod \"neutron-db-create-xvbcd\" (UID: \"17eb787e-6879-4fde-896d-6d22cab6748e\") " pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.695919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-58ws2\" (UniqueName: \"kubernetes.io/projected/3e1e6856-cc32-474e-8623-48629ef12382-kube-api-access-58ws2\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.696030 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17eb787e-6879-4fde-896d-6d22cab6748e-operator-scripts\") pod \"neutron-db-create-xvbcd\" (UID: \"17eb787e-6879-4fde-896d-6d22cab6748e\") " pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.740290 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67a4-account-create-update-xzh65"] Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.742121 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.744937 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.768199 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67a4-account-create-update-xzh65"] Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.797522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-combined-ca-bundle\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.797750 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-db-sync-config-data\") 
pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.797840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-config-data\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.797903 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl2r5\" (UniqueName: \"kubernetes.io/projected/17eb787e-6879-4fde-896d-6d22cab6748e-kube-api-access-nl2r5\") pod \"neutron-db-create-xvbcd\" (UID: \"17eb787e-6879-4fde-896d-6d22cab6748e\") " pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.798062 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58ws2\" (UniqueName: \"kubernetes.io/projected/3e1e6856-cc32-474e-8623-48629ef12382-kube-api-access-58ws2\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.798134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17eb787e-6879-4fde-896d-6d22cab6748e-operator-scripts\") pod \"neutron-db-create-xvbcd\" (UID: \"17eb787e-6879-4fde-896d-6d22cab6748e\") " pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.800668 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17eb787e-6879-4fde-896d-6d22cab6748e-operator-scripts\") pod \"neutron-db-create-xvbcd\" (UID: \"17eb787e-6879-4fde-896d-6d22cab6748e\") " 
pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.806356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-db-sync-config-data\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.810708 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-combined-ca-bundle\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.814823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-config-data\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.824188 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58ws2\" (UniqueName: \"kubernetes.io/projected/3e1e6856-cc32-474e-8623-48629ef12382-kube-api-access-58ws2\") pod \"watcher-db-sync-2qcj2\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.828667 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl2r5\" (UniqueName: \"kubernetes.io/projected/17eb787e-6879-4fde-896d-6d22cab6748e-kube-api-access-nl2r5\") pod \"neutron-db-create-xvbcd\" (UID: \"17eb787e-6879-4fde-896d-6d22cab6748e\") " pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.876696 4869 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.900849 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrn8s\" (UniqueName: \"kubernetes.io/projected/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-kube-api-access-rrn8s\") pod \"neutron-67a4-account-create-update-xzh65\" (UID: \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\") " pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.901861 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-operator-scripts\") pod \"neutron-67a4-account-create-update-xzh65\" (UID: \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\") " pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:40 crc kubenswrapper[4869]: I0314 09:18:40.929862 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.005665 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrn8s\" (UniqueName: \"kubernetes.io/projected/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-kube-api-access-rrn8s\") pod \"neutron-67a4-account-create-update-xzh65\" (UID: \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\") " pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.005807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-operator-scripts\") pod \"neutron-67a4-account-create-update-xzh65\" (UID: \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\") " pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.006696 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-operator-scripts\") pod \"neutron-67a4-account-create-update-xzh65\" (UID: \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\") " pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.044150 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrn8s\" (UniqueName: \"kubernetes.io/projected/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-kube-api-access-rrn8s\") pod \"neutron-67a4-account-create-update-xzh65\" (UID: \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\") " pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.057935 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-050d-account-create-update-hfstd" 
event={"ID":"43206ff4-5f51-4c34-89af-92c875be15a7","Type":"ContainerStarted","Data":"5a3506dae61d6e571f47db99d85098e8806910814b2371c422793b7bd361de35"} Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.057979 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-050d-account-create-update-hfstd" event={"ID":"43206ff4-5f51-4c34-89af-92c875be15a7","Type":"ContainerStarted","Data":"be910b4ad107d322a9f8e4b3420daa5aa1af779e77c173cd6125e376b63aabab"} Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.064773 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.065693 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pdmtz" event={"ID":"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2","Type":"ContainerStarted","Data":"291d39c04c580c509b0b2f6589ab2a8a7d721b6dd563b65d94c5483addf518e6"} Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.065721 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pdmtz" event={"ID":"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2","Type":"ContainerStarted","Data":"6f6b9a83f0bffc8b6a0e877b3cf016339929befa586cb6349aee1a079d3b4528"} Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.075232 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w9p7x" event={"ID":"9c4d107b-ac87-494b-8aeb-83d4488e934c","Type":"ContainerStarted","Data":"d9b30cd4342d65af1769db7821e25369d529e84242c88f987b375b2df258a848"} Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.100568 4869 generic.go:334] "Generic (PLEG): container finished" podID="79e8c4d6-376f-4130-8057-06519abb646a" containerID="31d7505207d3a4d1a7f3d0d645209173109c78f6ee3b80a9b0bff68706397b16" exitCode=0 Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.100629 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-j8jwz" event={"ID":"79e8c4d6-376f-4130-8057-06519abb646a","Type":"ContainerDied","Data":"31d7505207d3a4d1a7f3d0d645209173109c78f6ee3b80a9b0bff68706397b16"} Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.104947 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2a10-account-create-update-fkpqt" event={"ID":"08c3104a-93f8-416c-b18c-c5434a205595","Type":"ContainerStarted","Data":"9d60e6790ad2f360f621c16f107ef758519d061b665b49adc82e4bcd2f372033"} Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.105228 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2a10-account-create-update-fkpqt" event={"ID":"08c3104a-93f8-416c-b18c-c5434a205595","Type":"ContainerStarted","Data":"6f2595cab1500d80466eb61cb4bedc993f64e742155152a2dab1ee2788c22d3e"} Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.112529 4869 generic.go:334] "Generic (PLEG): container finished" podID="4c6ece88-630b-4388-a67a-7356b8f3812e" containerID="bdf579559ac99043b74e0e6eeeb1462ddc3b6ccbfa7f3072aec013c9018fdc18" exitCode=0 Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.112788 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4b2jl" event={"ID":"4c6ece88-630b-4388-a67a-7356b8f3812e","Type":"ContainerDied","Data":"bdf579559ac99043b74e0e6eeeb1462ddc3b6ccbfa7f3072aec013c9018fdc18"} Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.120624 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-050d-account-create-update-hfstd" podStartSLOduration=2.120606697 podStartE2EDuration="2.120606697s" podCreationTimestamp="2026-03-14 09:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:41.093352693 +0000 UTC m=+1274.065634746" watchObservedRunningTime="2026-03-14 09:18:41.120606697 +0000 UTC m=+1274.092888750" Mar 14 
09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.125047 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-pdmtz" podStartSLOduration=2.125033226 podStartE2EDuration="2.125033226s" podCreationTimestamp="2026-03-14 09:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:41.11627675 +0000 UTC m=+1274.088558793" watchObservedRunningTime="2026-03-14 09:18:41.125033226 +0000 UTC m=+1274.097315279" Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.198435 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2a10-account-create-update-fkpqt" podStartSLOduration=2.198413816 podStartE2EDuration="2.198413816s" podCreationTimestamp="2026-03-14 09:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:41.169100032 +0000 UTC m=+1274.141382095" watchObservedRunningTime="2026-03-14 09:18:41.198413816 +0000 UTC m=+1274.170695869" Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.492985 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-2qcj2"] Mar 14 09:18:41 crc kubenswrapper[4869]: W0314 09:18:41.527616 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e1e6856_cc32_474e_8623_48629ef12382.slice/crio-8e31ceb371511bc6d41c8074a472c21959fe309df51ce6a3886f18fe5786088a WatchSource:0}: Error finding container 8e31ceb371511bc6d41c8074a472c21959fe309df51ce6a3886f18fe5786088a: Status 404 returned error can't find the container with id 8e31ceb371511bc6d41c8074a472c21959fe309df51ce6a3886f18fe5786088a Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.629238 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xvbcd"] Mar 
14 09:18:41 crc kubenswrapper[4869]: W0314 09:18:41.632334 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17eb787e_6879_4fde_896d_6d22cab6748e.slice/crio-51ab956e859c33b62858d454cce802044906dbef46fd82556e67e2e55fad3076 WatchSource:0}: Error finding container 51ab956e859c33b62858d454cce802044906dbef46fd82556e67e2e55fad3076: Status 404 returned error can't find the container with id 51ab956e859c33b62858d454cce802044906dbef46fd82556e67e2e55fad3076 Mar 14 09:18:41 crc kubenswrapper[4869]: I0314 09:18:41.733136 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67a4-account-create-update-xzh65"] Mar 14 09:18:41 crc kubenswrapper[4869]: W0314 09:18:41.736947 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc088cd02_57e6_4c2b_b2bd_4eede2aa610e.slice/crio-7a6644289e14f987068c139e1a9ee14721d60bf21e1362dcc06752f41510807e WatchSource:0}: Error finding container 7a6644289e14f987068c139e1a9ee14721d60bf21e1362dcc06752f41510807e: Status 404 returned error can't find the container with id 7a6644289e14f987068c139e1a9ee14721d60bf21e1362dcc06752f41510807e Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.125924 4869 generic.go:334] "Generic (PLEG): container finished" podID="08c3104a-93f8-416c-b18c-c5434a205595" containerID="9d60e6790ad2f360f621c16f107ef758519d061b665b49adc82e4bcd2f372033" exitCode=0 Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.126012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2a10-account-create-update-fkpqt" event={"ID":"08c3104a-93f8-416c-b18c-c5434a205595","Type":"ContainerDied","Data":"9d60e6790ad2f360f621c16f107ef758519d061b665b49adc82e4bcd2f372033"} Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.128341 4869 generic.go:334] "Generic (PLEG): container finished" podID="43206ff4-5f51-4c34-89af-92c875be15a7" 
containerID="5a3506dae61d6e571f47db99d85098e8806910814b2371c422793b7bd361de35" exitCode=0 Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.128388 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-050d-account-create-update-hfstd" event={"ID":"43206ff4-5f51-4c34-89af-92c875be15a7","Type":"ContainerDied","Data":"5a3506dae61d6e571f47db99d85098e8806910814b2371c422793b7bd361de35"} Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.130237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-2qcj2" event={"ID":"3e1e6856-cc32-474e-8623-48629ef12382","Type":"ContainerStarted","Data":"8e31ceb371511bc6d41c8074a472c21959fe309df51ce6a3886f18fe5786088a"} Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.132176 4869 generic.go:334] "Generic (PLEG): container finished" podID="2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2" containerID="291d39c04c580c509b0b2f6589ab2a8a7d721b6dd563b65d94c5483addf518e6" exitCode=0 Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.132260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pdmtz" event={"ID":"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2","Type":"ContainerDied","Data":"291d39c04c580c509b0b2f6589ab2a8a7d721b6dd563b65d94c5483addf518e6"} Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.133830 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67a4-account-create-update-xzh65" event={"ID":"c088cd02-57e6-4c2b-b2bd-4eede2aa610e","Type":"ContainerStarted","Data":"5462748d3bfe2dec44fb5f71e626cb824ef6520fcd36b3b733563732132e48af"} Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.133870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67a4-account-create-update-xzh65" event={"ID":"c088cd02-57e6-4c2b-b2bd-4eede2aa610e","Type":"ContainerStarted","Data":"7a6644289e14f987068c139e1a9ee14721d60bf21e1362dcc06752f41510807e"} Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 
09:18:42.135409 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xvbcd" event={"ID":"17eb787e-6879-4fde-896d-6d22cab6748e","Type":"ContainerStarted","Data":"0daa93c3f89f1ea99752bbdcc8fc060c862276ec13718484fb827eac8aa5a5b2"} Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.135488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xvbcd" event={"ID":"17eb787e-6879-4fde-896d-6d22cab6748e","Type":"ContainerStarted","Data":"51ab956e859c33b62858d454cce802044906dbef46fd82556e67e2e55fad3076"} Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.159545 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67a4-account-create-update-xzh65" podStartSLOduration=2.159520952 podStartE2EDuration="2.159520952s" podCreationTimestamp="2026-03-14 09:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:42.155746689 +0000 UTC m=+1275.128028762" watchObservedRunningTime="2026-03-14 09:18:42.159520952 +0000 UTC m=+1275.131803015" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.217728 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-xvbcd" podStartSLOduration=2.217707537 podStartE2EDuration="2.217707537s" podCreationTimestamp="2026-03-14 09:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:42.215890383 +0000 UTC m=+1275.188172436" watchObservedRunningTime="2026-03-14 09:18:42.217707537 +0000 UTC m=+1275.189989610" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.544694 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.607143 4869 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6957795c-zgr2v"] Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.607425 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" podUID="48035592-e6a5-424f-873e-5bfb77db4f85" containerName="dnsmasq-dns" containerID="cri-o://d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a" gracePeriod=10 Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.615308 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.726050 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j8jwz" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.758842 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lc8l\" (UniqueName: \"kubernetes.io/projected/4c6ece88-630b-4388-a67a-7356b8f3812e-kube-api-access-7lc8l\") pod \"4c6ece88-630b-4388-a67a-7356b8f3812e\" (UID: \"4c6ece88-630b-4388-a67a-7356b8f3812e\") " Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.758988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6ece88-630b-4388-a67a-7356b8f3812e-operator-scripts\") pod \"4c6ece88-630b-4388-a67a-7356b8f3812e\" (UID: \"4c6ece88-630b-4388-a67a-7356b8f3812e\") " Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.759540 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c6ece88-630b-4388-a67a-7356b8f3812e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c6ece88-630b-4388-a67a-7356b8f3812e" (UID: "4c6ece88-630b-4388-a67a-7356b8f3812e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.759858 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6ece88-630b-4388-a67a-7356b8f3812e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.765465 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6ece88-630b-4388-a67a-7356b8f3812e-kube-api-access-7lc8l" (OuterVolumeSpecName: "kube-api-access-7lc8l") pod "4c6ece88-630b-4388-a67a-7356b8f3812e" (UID: "4c6ece88-630b-4388-a67a-7356b8f3812e"). InnerVolumeSpecName "kube-api-access-7lc8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.861768 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsr2b\" (UniqueName: \"kubernetes.io/projected/79e8c4d6-376f-4130-8057-06519abb646a-kube-api-access-fsr2b\") pod \"79e8c4d6-376f-4130-8057-06519abb646a\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.861832 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-config-data\") pod \"79e8c4d6-376f-4130-8057-06519abb646a\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.861863 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-db-sync-config-data\") pod \"79e8c4d6-376f-4130-8057-06519abb646a\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.861942 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-combined-ca-bundle\") pod \"79e8c4d6-376f-4130-8057-06519abb646a\" (UID: \"79e8c4d6-376f-4130-8057-06519abb646a\") " Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.862434 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lc8l\" (UniqueName: \"kubernetes.io/projected/4c6ece88-630b-4388-a67a-7356b8f3812e-kube-api-access-7lc8l\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.867705 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "79e8c4d6-376f-4130-8057-06519abb646a" (UID: "79e8c4d6-376f-4130-8057-06519abb646a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.868223 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e8c4d6-376f-4130-8057-06519abb646a-kube-api-access-fsr2b" (OuterVolumeSpecName: "kube-api-access-fsr2b") pod "79e8c4d6-376f-4130-8057-06519abb646a" (UID: "79e8c4d6-376f-4130-8057-06519abb646a"). InnerVolumeSpecName "kube-api-access-fsr2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.907972 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79e8c4d6-376f-4130-8057-06519abb646a" (UID: "79e8c4d6-376f-4130-8057-06519abb646a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.919162 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-config-data" (OuterVolumeSpecName: "config-data") pod "79e8c4d6-376f-4130-8057-06519abb646a" (UID: "79e8c4d6-376f-4130-8057-06519abb646a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.969955 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.970000 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsr2b\" (UniqueName: \"kubernetes.io/projected/79e8c4d6-376f-4130-8057-06519abb646a-kube-api-access-fsr2b\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.970029 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:42 crc kubenswrapper[4869]: I0314 09:18:42.970043 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/79e8c4d6-376f-4130-8057-06519abb646a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.021158 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.145999 4869 generic.go:334] "Generic (PLEG): container finished" podID="c088cd02-57e6-4c2b-b2bd-4eede2aa610e" containerID="5462748d3bfe2dec44fb5f71e626cb824ef6520fcd36b3b733563732132e48af" exitCode=0 Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.146076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67a4-account-create-update-xzh65" event={"ID":"c088cd02-57e6-4c2b-b2bd-4eede2aa610e","Type":"ContainerDied","Data":"5462748d3bfe2dec44fb5f71e626cb824ef6520fcd36b3b733563732132e48af"} Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.148699 4869 generic.go:334] "Generic (PLEG): container finished" podID="17eb787e-6879-4fde-896d-6d22cab6748e" containerID="0daa93c3f89f1ea99752bbdcc8fc060c862276ec13718484fb827eac8aa5a5b2" exitCode=0 Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.148805 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xvbcd" event={"ID":"17eb787e-6879-4fde-896d-6d22cab6748e","Type":"ContainerDied","Data":"0daa93c3f89f1ea99752bbdcc8fc060c862276ec13718484fb827eac8aa5a5b2"} Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.151608 4869 generic.go:334] "Generic (PLEG): container finished" podID="48035592-e6a5-424f-873e-5bfb77db4f85" containerID="d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a" exitCode=0 Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.151648 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.151700 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" event={"ID":"48035592-e6a5-424f-873e-5bfb77db4f85","Type":"ContainerDied","Data":"d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a"} Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.151766 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6957795c-zgr2v" event={"ID":"48035592-e6a5-424f-873e-5bfb77db4f85","Type":"ContainerDied","Data":"7a49cfbd8fd3252911a493fa5fa87d3947e60bb0364ca35f55c468b3b890f27e"} Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.151792 4869 scope.go:117] "RemoveContainer" containerID="d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.157387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j8jwz" event={"ID":"79e8c4d6-376f-4130-8057-06519abb646a","Type":"ContainerDied","Data":"5bb5b5c09cb6b2eb9e181432ce61446828f5c2893c30e5b9c01f7b447f1456f4"} Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.157418 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bb5b5c09cb6b2eb9e181432ce61446828f5c2893c30e5b9c01f7b447f1456f4" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.157424 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j8jwz" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.168097 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4b2jl" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.171761 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4b2jl" event={"ID":"4c6ece88-630b-4388-a67a-7356b8f3812e","Type":"ContainerDied","Data":"f5256b4563119901a9193aacc764e00aacf1f9ba63927d3c1855194938852f45"} Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.171793 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5256b4563119901a9193aacc764e00aacf1f9ba63927d3c1855194938852f45" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.172921 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtln4\" (UniqueName: \"kubernetes.io/projected/48035592-e6a5-424f-873e-5bfb77db4f85-kube-api-access-gtln4\") pod \"48035592-e6a5-424f-873e-5bfb77db4f85\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.172955 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-config\") pod \"48035592-e6a5-424f-873e-5bfb77db4f85\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.173042 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-sb\") pod \"48035592-e6a5-424f-873e-5bfb77db4f85\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.173088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-dns-svc\") pod \"48035592-e6a5-424f-873e-5bfb77db4f85\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " Mar 14 
09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.173107 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-nb\") pod \"48035592-e6a5-424f-873e-5bfb77db4f85\" (UID: \"48035592-e6a5-424f-873e-5bfb77db4f85\") " Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.320354 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48035592-e6a5-424f-873e-5bfb77db4f85" (UID: "48035592-e6a5-424f-873e-5bfb77db4f85"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.325759 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.336419 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48035592-e6a5-424f-873e-5bfb77db4f85-kube-api-access-gtln4" (OuterVolumeSpecName: "kube-api-access-gtln4") pod "48035592-e6a5-424f-873e-5bfb77db4f85" (UID: "48035592-e6a5-424f-873e-5bfb77db4f85"). InnerVolumeSpecName "kube-api-access-gtln4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.405291 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48035592-e6a5-424f-873e-5bfb77db4f85" (UID: "48035592-e6a5-424f-873e-5bfb77db4f85"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.430629 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtln4\" (UniqueName: \"kubernetes.io/projected/48035592-e6a5-424f-873e-5bfb77db4f85-kube-api-access-gtln4\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.431614 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.437119 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-config" (OuterVolumeSpecName: "config") pod "48035592-e6a5-424f-873e-5bfb77db4f85" (UID: "48035592-e6a5-424f-873e-5bfb77db4f85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.464433 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48035592-e6a5-424f-873e-5bfb77db4f85" (UID: "48035592-e6a5-424f-873e-5bfb77db4f85"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.534126 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.534163 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48035592-e6a5-424f-873e-5bfb77db4f85-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.641172 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6957795c-zgr2v"] Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.651647 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d6957795c-zgr2v"] Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.744420 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48035592-e6a5-424f-873e-5bfb77db4f85" path="/var/lib/kubelet/pods/48035592-e6a5-424f-873e-5bfb77db4f85/volumes" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.745378 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f996c95-j4szb"] Mar 14 09:18:43 crc kubenswrapper[4869]: E0314 09:18:43.753704 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48035592-e6a5-424f-873e-5bfb77db4f85" containerName="init" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.753736 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="48035592-e6a5-424f-873e-5bfb77db4f85" containerName="init" Mar 14 09:18:43 crc kubenswrapper[4869]: E0314 09:18:43.753779 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e8c4d6-376f-4130-8057-06519abb646a" containerName="glance-db-sync" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.753787 4869 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="79e8c4d6-376f-4130-8057-06519abb646a" containerName="glance-db-sync" Mar 14 09:18:43 crc kubenswrapper[4869]: E0314 09:18:43.753800 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48035592-e6a5-424f-873e-5bfb77db4f85" containerName="dnsmasq-dns" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.753806 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="48035592-e6a5-424f-873e-5bfb77db4f85" containerName="dnsmasq-dns" Mar 14 09:18:43 crc kubenswrapper[4869]: E0314 09:18:43.753829 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6ece88-630b-4388-a67a-7356b8f3812e" containerName="mariadb-database-create" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.753836 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6ece88-630b-4388-a67a-7356b8f3812e" containerName="mariadb-database-create" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.754097 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="48035592-e6a5-424f-873e-5bfb77db4f85" containerName="dnsmasq-dns" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.754117 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e8c4d6-376f-4130-8057-06519abb646a" containerName="glance-db-sync" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.754126 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6ece88-630b-4388-a67a-7356b8f3812e" containerName="mariadb-database-create" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.755218 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.771065 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f996c95-j4szb"] Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.849046 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.849126 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.849177 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-svc\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.849254 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.849423 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-config\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.849492 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvgv\" (UniqueName: \"kubernetes.io/projected/c6b271a0-998e-46d6-863f-ce41b946c67d-kube-api-access-xtvgv\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.951834 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtvgv\" (UniqueName: \"kubernetes.io/projected/c6b271a0-998e-46d6-863f-ce41b946c67d-kube-api-access-xtvgv\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.951951 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.952004 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.952040 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-svc\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.952096 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.952143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-config\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.953099 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-config\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.954048 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.954828 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-svc\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.954896 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.955612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:43 crc kubenswrapper[4869]: I0314 09:18:43.970371 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtvgv\" (UniqueName: \"kubernetes.io/projected/c6b271a0-998e-46d6-863f-ce41b946c67d-kube-api-access-xtvgv\") pod \"dnsmasq-dns-6f6f996c95-j4szb\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:44 crc kubenswrapper[4869]: I0314 09:18:44.104677 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:44 crc kubenswrapper[4869]: I0314 09:18:44.189355 4869 generic.go:334] "Generic (PLEG): container finished" podID="310e10f6-6126-4199-bc3f-e386680b8acb" containerID="a54b29e4e7a577c933f004611584100471c5db996da6c3d96d25910e249bbceb" exitCode=0 Mar 14 09:18:44 crc kubenswrapper[4869]: I0314 09:18:44.189587 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"310e10f6-6126-4199-bc3f-e386680b8acb","Type":"ContainerDied","Data":"a54b29e4e7a577c933f004611584100471c5db996da6c3d96d25910e249bbceb"} Mar 14 09:18:48 crc kubenswrapper[4869]: I0314 09:18:48.822242 4869 scope.go:117] "RemoveContainer" containerID="6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.001056 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.012235 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.018742 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.072836 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.104342 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.169563 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x5kv\" (UniqueName: \"kubernetes.io/projected/08c3104a-93f8-416c-b18c-c5434a205595-kube-api-access-7x5kv\") pod \"08c3104a-93f8-416c-b18c-c5434a205595\" (UID: \"08c3104a-93f8-416c-b18c-c5434a205595\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.169669 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c3104a-93f8-416c-b18c-c5434a205595-operator-scripts\") pod \"08c3104a-93f8-416c-b18c-c5434a205595\" (UID: \"08c3104a-93f8-416c-b18c-c5434a205595\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.169750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-operator-scripts\") pod \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\" (UID: \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.169835 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43206ff4-5f51-4c34-89af-92c875be15a7-operator-scripts\") pod \"43206ff4-5f51-4c34-89af-92c875be15a7\" (UID: \"43206ff4-5f51-4c34-89af-92c875be15a7\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.169886 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxw85\" (UniqueName: \"kubernetes.io/projected/43206ff4-5f51-4c34-89af-92c875be15a7-kube-api-access-xxw85\") pod \"43206ff4-5f51-4c34-89af-92c875be15a7\" (UID: \"43206ff4-5f51-4c34-89af-92c875be15a7\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.169983 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rrn8s\" (UniqueName: \"kubernetes.io/projected/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-kube-api-access-rrn8s\") pod \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\" (UID: \"c088cd02-57e6-4c2b-b2bd-4eede2aa610e\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.170465 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c088cd02-57e6-4c2b-b2bd-4eede2aa610e" (UID: "c088cd02-57e6-4c2b-b2bd-4eede2aa610e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.171146 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43206ff4-5f51-4c34-89af-92c875be15a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43206ff4-5f51-4c34-89af-92c875be15a7" (UID: "43206ff4-5f51-4c34-89af-92c875be15a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.171165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c3104a-93f8-416c-b18c-c5434a205595-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08c3104a-93f8-416c-b18c-c5434a205595" (UID: "08c3104a-93f8-416c-b18c-c5434a205595"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.174565 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43206ff4-5f51-4c34-89af-92c875be15a7-kube-api-access-xxw85" (OuterVolumeSpecName: "kube-api-access-xxw85") pod "43206ff4-5f51-4c34-89af-92c875be15a7" (UID: "43206ff4-5f51-4c34-89af-92c875be15a7"). 
InnerVolumeSpecName "kube-api-access-xxw85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.175970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-kube-api-access-rrn8s" (OuterVolumeSpecName: "kube-api-access-rrn8s") pod "c088cd02-57e6-4c2b-b2bd-4eede2aa610e" (UID: "c088cd02-57e6-4c2b-b2bd-4eede2aa610e"). InnerVolumeSpecName "kube-api-access-rrn8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.200002 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c3104a-93f8-416c-b18c-c5434a205595-kube-api-access-7x5kv" (OuterVolumeSpecName: "kube-api-access-7x5kv") pod "08c3104a-93f8-416c-b18c-c5434a205595" (UID: "08c3104a-93f8-416c-b18c-c5434a205595"). InnerVolumeSpecName "kube-api-access-7x5kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.241628 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xvbcd" event={"ID":"17eb787e-6879-4fde-896d-6d22cab6748e","Type":"ContainerDied","Data":"51ab956e859c33b62858d454cce802044906dbef46fd82556e67e2e55fad3076"} Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.241679 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ab956e859c33b62858d454cce802044906dbef46fd82556e67e2e55fad3076" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.241766 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xvbcd" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.244935 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2a10-account-create-update-fkpqt" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.244954 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2a10-account-create-update-fkpqt" event={"ID":"08c3104a-93f8-416c-b18c-c5434a205595","Type":"ContainerDied","Data":"6f2595cab1500d80466eb61cb4bedc993f64e742155152a2dab1ee2788c22d3e"} Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.245042 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f2595cab1500d80466eb61cb4bedc993f64e742155152a2dab1ee2788c22d3e" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.246958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-050d-account-create-update-hfstd" event={"ID":"43206ff4-5f51-4c34-89af-92c875be15a7","Type":"ContainerDied","Data":"be910b4ad107d322a9f8e4b3420daa5aa1af779e77c173cd6125e376b63aabab"} Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.246992 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be910b4ad107d322a9f8e4b3420daa5aa1af779e77c173cd6125e376b63aabab" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.247008 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-050d-account-create-update-hfstd" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.248377 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pdmtz" event={"ID":"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2","Type":"ContainerDied","Data":"6f6b9a83f0bffc8b6a0e877b3cf016339929befa586cb6349aee1a079d3b4528"} Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.248402 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f6b9a83f0bffc8b6a0e877b3cf016339929befa586cb6349aee1a079d3b4528" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.248441 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pdmtz" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.250011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67a4-account-create-update-xzh65" event={"ID":"c088cd02-57e6-4c2b-b2bd-4eede2aa610e","Type":"ContainerDied","Data":"7a6644289e14f987068c139e1a9ee14721d60bf21e1362dcc06752f41510807e"} Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.250034 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a6644289e14f987068c139e1a9ee14721d60bf21e1362dcc06752f41510807e" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.250083 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67a4-account-create-update-xzh65" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.271383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17eb787e-6879-4fde-896d-6d22cab6748e-operator-scripts\") pod \"17eb787e-6879-4fde-896d-6d22cab6748e\" (UID: \"17eb787e-6879-4fde-896d-6d22cab6748e\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.271435 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7bb4\" (UniqueName: \"kubernetes.io/projected/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-kube-api-access-d7bb4\") pod \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\" (UID: \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.271635 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-operator-scripts\") pod \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\" (UID: \"2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.271765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl2r5\" (UniqueName: \"kubernetes.io/projected/17eb787e-6879-4fde-896d-6d22cab6748e-kube-api-access-nl2r5\") pod \"17eb787e-6879-4fde-896d-6d22cab6748e\" (UID: \"17eb787e-6879-4fde-896d-6d22cab6748e\") " Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.271856 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17eb787e-6879-4fde-896d-6d22cab6748e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17eb787e-6879-4fde-896d-6d22cab6748e" (UID: "17eb787e-6879-4fde-896d-6d22cab6748e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.272349 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43206ff4-5f51-4c34-89af-92c875be15a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.272406 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxw85\" (UniqueName: \"kubernetes.io/projected/43206ff4-5f51-4c34-89af-92c875be15a7-kube-api-access-xxw85\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.272422 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrn8s\" (UniqueName: \"kubernetes.io/projected/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-kube-api-access-rrn8s\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.272435 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17eb787e-6879-4fde-896d-6d22cab6748e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.272449 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x5kv\" (UniqueName: \"kubernetes.io/projected/08c3104a-93f8-416c-b18c-c5434a205595-kube-api-access-7x5kv\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.272460 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08c3104a-93f8-416c-b18c-c5434a205595-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.272471 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c088cd02-57e6-4c2b-b2bd-4eede2aa610e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 
09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.272736 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2" (UID: "2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.274778 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-kube-api-access-d7bb4" (OuterVolumeSpecName: "kube-api-access-d7bb4") pod "2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2" (UID: "2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2"). InnerVolumeSpecName "kube-api-access-d7bb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.291436 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17eb787e-6879-4fde-896d-6d22cab6748e-kube-api-access-nl2r5" (OuterVolumeSpecName: "kube-api-access-nl2r5") pod "17eb787e-6879-4fde-896d-6d22cab6748e" (UID: "17eb787e-6879-4fde-896d-6d22cab6748e"). InnerVolumeSpecName "kube-api-access-nl2r5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.373744 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.373783 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl2r5\" (UniqueName: \"kubernetes.io/projected/17eb787e-6879-4fde-896d-6d22cab6748e-kube-api-access-nl2r5\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:49 crc kubenswrapper[4869]: I0314 09:18:49.373797 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7bb4\" (UniqueName: \"kubernetes.io/projected/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2-kube-api-access-d7bb4\") on node \"crc\" DevicePath \"\"" Mar 14 09:18:53 crc kubenswrapper[4869]: I0314 09:18:53.593543 4869 scope.go:117] "RemoveContainer" containerID="d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a" Mar 14 09:18:53 crc kubenswrapper[4869]: E0314 09:18:53.594463 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a\": container with ID starting with d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a not found: ID does not exist" containerID="d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a" Mar 14 09:18:53 crc kubenswrapper[4869]: I0314 09:18:53.594533 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a"} err="failed to get container status \"d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a\": rpc error: code = NotFound desc = could not find container 
\"d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a\": container with ID starting with d8f06e7333aca200f526beb4da2417c25196bb42b6c158f8b66a21f1a2d7998a not found: ID does not exist" Mar 14 09:18:53 crc kubenswrapper[4869]: I0314 09:18:53.594572 4869 scope.go:117] "RemoveContainer" containerID="6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80" Mar 14 09:18:53 crc kubenswrapper[4869]: E0314 09:18:53.594859 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80\": container with ID starting with 6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80 not found: ID does not exist" containerID="6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80" Mar 14 09:18:53 crc kubenswrapper[4869]: I0314 09:18:53.594894 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80"} err="failed to get container status \"6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80\": rpc error: code = NotFound desc = could not find container \"6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80\": container with ID starting with 6e7f3e3683f7656825a5331af53e8f567791fd1d537805d83d8386c4b67d1b80 not found: ID does not exist" Mar 14 09:18:54 crc kubenswrapper[4869]: I0314 09:18:54.092261 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f996c95-j4szb"] Mar 14 09:18:54 crc kubenswrapper[4869]: I0314 09:18:54.291997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" event={"ID":"c6b271a0-998e-46d6-863f-ce41b946c67d","Type":"ContainerStarted","Data":"cc05272fa056d4aaf0631c30273b4c5c619a6be58cd35214b2084d778d75eb0e"} Mar 14 09:18:54 crc kubenswrapper[4869]: I0314 09:18:54.293936 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-2qcj2" event={"ID":"3e1e6856-cc32-474e-8623-48629ef12382","Type":"ContainerStarted","Data":"86194f5de3036d3e093328210a69d11e4fdddf180f9c600e3c0de4e4d14e9d0f"} Mar 14 09:18:54 crc kubenswrapper[4869]: I0314 09:18:54.298671 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w9p7x" event={"ID":"9c4d107b-ac87-494b-8aeb-83d4488e934c","Type":"ContainerStarted","Data":"da7770e67bcf3029acd2cb3eaf0e1a168134f0c11abda151170a307b68b36548"} Mar 14 09:18:54 crc kubenswrapper[4869]: I0314 09:18:54.303218 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"310e10f6-6126-4199-bc3f-e386680b8acb","Type":"ContainerStarted","Data":"63a2143fcd944cb559d79ac42ba3c5ff388cdbc4ede226c3ec950b72b776d211"} Mar 14 09:18:54 crc kubenswrapper[4869]: I0314 09:18:54.321740 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-2qcj2" podStartSLOduration=2.106469012 podStartE2EDuration="14.321720848s" podCreationTimestamp="2026-03-14 09:18:40 +0000 UTC" firstStartedPulling="2026-03-14 09:18:41.536856727 +0000 UTC m=+1274.509138780" lastFinishedPulling="2026-03-14 09:18:53.752108563 +0000 UTC m=+1286.724390616" observedRunningTime="2026-03-14 09:18:54.314204043 +0000 UTC m=+1287.286486086" watchObservedRunningTime="2026-03-14 09:18:54.321720848 +0000 UTC m=+1287.294002901" Mar 14 09:18:54 crc kubenswrapper[4869]: I0314 09:18:54.337232 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-w9p7x" podStartSLOduration=1.8095892120000001 podStartE2EDuration="15.33721543s" podCreationTimestamp="2026-03-14 09:18:39 +0000 UTC" firstStartedPulling="2026-03-14 09:18:40.093133473 +0000 UTC m=+1273.065415526" lastFinishedPulling="2026-03-14 09:18:53.620759691 +0000 UTC m=+1286.593041744" observedRunningTime="2026-03-14 09:18:54.332351481 +0000 UTC 
m=+1287.304633554" watchObservedRunningTime="2026-03-14 09:18:54.33721543 +0000 UTC m=+1287.309497483" Mar 14 09:18:55 crc kubenswrapper[4869]: I0314 09:18:55.322442 4869 generic.go:334] "Generic (PLEG): container finished" podID="c6b271a0-998e-46d6-863f-ce41b946c67d" containerID="981d515873fb318397f658065fcc33ac3a8be9c9ca1c79c1bc0ef23fd1eebdc7" exitCode=0 Mar 14 09:18:55 crc kubenswrapper[4869]: I0314 09:18:55.322562 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" event={"ID":"c6b271a0-998e-46d6-863f-ce41b946c67d","Type":"ContainerDied","Data":"981d515873fb318397f658065fcc33ac3a8be9c9ca1c79c1bc0ef23fd1eebdc7"} Mar 14 09:18:56 crc kubenswrapper[4869]: I0314 09:18:56.340572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" event={"ID":"c6b271a0-998e-46d6-863f-ce41b946c67d","Type":"ContainerStarted","Data":"68d188360ae68bdc5a6ce55a437b895502dbebf1868cad75560ce9c38f419543"} Mar 14 09:18:56 crc kubenswrapper[4869]: I0314 09:18:56.341855 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:18:57 crc kubenswrapper[4869]: I0314 09:18:57.351547 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"310e10f6-6126-4199-bc3f-e386680b8acb","Type":"ContainerStarted","Data":"960305158e3f24d17f4b0b278bedfe913d38c5d235a5414e109898ed2c871809"} Mar 14 09:18:57 crc kubenswrapper[4869]: I0314 09:18:57.351901 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"310e10f6-6126-4199-bc3f-e386680b8acb","Type":"ContainerStarted","Data":"2a53de6d7fcab69240abd8bb34105908b006d0f8ae55a089540db94548a3a924"} Mar 14 09:18:57 crc kubenswrapper[4869]: I0314 09:18:57.383247 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" 
podStartSLOduration=14.383219962 podStartE2EDuration="14.383219962s" podCreationTimestamp="2026-03-14 09:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:56.37162312 +0000 UTC m=+1289.343905193" watchObservedRunningTime="2026-03-14 09:18:57.383219962 +0000 UTC m=+1290.355502015" Mar 14 09:18:57 crc kubenswrapper[4869]: I0314 09:18:57.390878 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.390777178 podStartE2EDuration="23.390777178s" podCreationTimestamp="2026-03-14 09:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:18:57.377243114 +0000 UTC m=+1290.349525177" watchObservedRunningTime="2026-03-14 09:18:57.390777178 +0000 UTC m=+1290.363059231" Mar 14 09:18:59 crc kubenswrapper[4869]: I0314 09:18:59.676965 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 14 09:19:00 crc kubenswrapper[4869]: I0314 09:19:00.391583 4869 generic.go:334] "Generic (PLEG): container finished" podID="3e1e6856-cc32-474e-8623-48629ef12382" containerID="86194f5de3036d3e093328210a69d11e4fdddf180f9c600e3c0de4e4d14e9d0f" exitCode=0 Mar 14 09:19:00 crc kubenswrapper[4869]: I0314 09:19:00.391643 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-2qcj2" event={"ID":"3e1e6856-cc32-474e-8623-48629ef12382","Type":"ContainerDied","Data":"86194f5de3036d3e093328210a69d11e4fdddf180f9c600e3c0de4e4d14e9d0f"} Mar 14 09:19:00 crc kubenswrapper[4869]: I0314 09:19:00.776209 4869 scope.go:117] "RemoveContainer" containerID="ee6be54ba3b92dee996fbb9033e564a57c100984441378bcf828afb84a47cf5f" Mar 14 09:19:01 crc kubenswrapper[4869]: I0314 09:19:01.788292 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:19:01 crc kubenswrapper[4869]: I0314 09:19:01.919143 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-config-data\") pod \"3e1e6856-cc32-474e-8623-48629ef12382\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " Mar 14 09:19:01 crc kubenswrapper[4869]: I0314 09:19:01.919251 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58ws2\" (UniqueName: \"kubernetes.io/projected/3e1e6856-cc32-474e-8623-48629ef12382-kube-api-access-58ws2\") pod \"3e1e6856-cc32-474e-8623-48629ef12382\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " Mar 14 09:19:01 crc kubenswrapper[4869]: I0314 09:19:01.919294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-combined-ca-bundle\") pod \"3e1e6856-cc32-474e-8623-48629ef12382\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " Mar 14 09:19:01 crc kubenswrapper[4869]: I0314 09:19:01.919390 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-db-sync-config-data\") pod \"3e1e6856-cc32-474e-8623-48629ef12382\" (UID: \"3e1e6856-cc32-474e-8623-48629ef12382\") " Mar 14 09:19:01 crc kubenswrapper[4869]: I0314 09:19:01.924446 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3e1e6856-cc32-474e-8623-48629ef12382" (UID: "3e1e6856-cc32-474e-8623-48629ef12382"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:01 crc kubenswrapper[4869]: I0314 09:19:01.927420 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1e6856-cc32-474e-8623-48629ef12382-kube-api-access-58ws2" (OuterVolumeSpecName: "kube-api-access-58ws2") pod "3e1e6856-cc32-474e-8623-48629ef12382" (UID: "3e1e6856-cc32-474e-8623-48629ef12382"). InnerVolumeSpecName "kube-api-access-58ws2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:01 crc kubenswrapper[4869]: I0314 09:19:01.945946 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e1e6856-cc32-474e-8623-48629ef12382" (UID: "3e1e6856-cc32-474e-8623-48629ef12382"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:01 crc kubenswrapper[4869]: I0314 09:19:01.966641 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-config-data" (OuterVolumeSpecName: "config-data") pod "3e1e6856-cc32-474e-8623-48629ef12382" (UID: "3e1e6856-cc32-474e-8623-48629ef12382"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:02 crc kubenswrapper[4869]: I0314 09:19:02.021968 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58ws2\" (UniqueName: \"kubernetes.io/projected/3e1e6856-cc32-474e-8623-48629ef12382-kube-api-access-58ws2\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4869]: I0314 09:19:02.022015 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4869]: I0314 09:19:02.022030 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4869]: I0314 09:19:02.022042 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1e6856-cc32-474e-8623-48629ef12382-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:02 crc kubenswrapper[4869]: I0314 09:19:02.410413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-2qcj2" event={"ID":"3e1e6856-cc32-474e-8623-48629ef12382","Type":"ContainerDied","Data":"8e31ceb371511bc6d41c8074a472c21959fe309df51ce6a3886f18fe5786088a"} Mar 14 09:19:02 crc kubenswrapper[4869]: I0314 09:19:02.410569 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e31ceb371511bc6d41c8074a472c21959fe309df51ce6a3886f18fe5786088a" Mar 14 09:19:02 crc kubenswrapper[4869]: I0314 09:19:02.410430 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-2qcj2" Mar 14 09:19:02 crc kubenswrapper[4869]: I0314 09:19:02.412699 4869 generic.go:334] "Generic (PLEG): container finished" podID="9c4d107b-ac87-494b-8aeb-83d4488e934c" containerID="da7770e67bcf3029acd2cb3eaf0e1a168134f0c11abda151170a307b68b36548" exitCode=0 Mar 14 09:19:02 crc kubenswrapper[4869]: I0314 09:19:02.412736 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w9p7x" event={"ID":"9c4d107b-ac87-494b-8aeb-83d4488e934c","Type":"ContainerDied","Data":"da7770e67bcf3029acd2cb3eaf0e1a168134f0c11abda151170a307b68b36548"} Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.747680 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.882130 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-config-data\") pod \"9c4d107b-ac87-494b-8aeb-83d4488e934c\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.882225 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw6k4\" (UniqueName: \"kubernetes.io/projected/9c4d107b-ac87-494b-8aeb-83d4488e934c-kube-api-access-kw6k4\") pod \"9c4d107b-ac87-494b-8aeb-83d4488e934c\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.882305 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-combined-ca-bundle\") pod \"9c4d107b-ac87-494b-8aeb-83d4488e934c\" (UID: \"9c4d107b-ac87-494b-8aeb-83d4488e934c\") " Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.896693 4869 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/9c4d107b-ac87-494b-8aeb-83d4488e934c-kube-api-access-kw6k4" (OuterVolumeSpecName: "kube-api-access-kw6k4") pod "9c4d107b-ac87-494b-8aeb-83d4488e934c" (UID: "9c4d107b-ac87-494b-8aeb-83d4488e934c"). InnerVolumeSpecName "kube-api-access-kw6k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.916467 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c4d107b-ac87-494b-8aeb-83d4488e934c" (UID: "9c4d107b-ac87-494b-8aeb-83d4488e934c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.944659 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-config-data" (OuterVolumeSpecName: "config-data") pod "9c4d107b-ac87-494b-8aeb-83d4488e934c" (UID: "9c4d107b-ac87-494b-8aeb-83d4488e934c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.984381 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.984414 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c4d107b-ac87-494b-8aeb-83d4488e934c-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:03 crc kubenswrapper[4869]: I0314 09:19:03.984424 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw6k4\" (UniqueName: \"kubernetes.io/projected/9c4d107b-ac87-494b-8aeb-83d4488e934c-kube-api-access-kw6k4\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.106667 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.171871 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d575b8c75-7rrrn"] Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.172162 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" podUID="e038eec8-d039-4436-a9af-3bd09cb8479f" containerName="dnsmasq-dns" containerID="cri-o://f2bf2532f97ec5b3b461c4d766cd6a25c9213301533ceb44a132269f5074bd24" gracePeriod=10 Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.432447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w9p7x" event={"ID":"9c4d107b-ac87-494b-8aeb-83d4488e934c","Type":"ContainerDied","Data":"d9b30cd4342d65af1769db7821e25369d529e84242c88f987b375b2df258a848"} Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.432492 4869 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="d9b30cd4342d65af1769db7821e25369d529e84242c88f987b375b2df258a848" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.432565 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-w9p7x" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.441611 4869 generic.go:334] "Generic (PLEG): container finished" podID="e038eec8-d039-4436-a9af-3bd09cb8479f" containerID="f2bf2532f97ec5b3b461c4d766cd6a25c9213301533ceb44a132269f5074bd24" exitCode=0 Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.441672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" event={"ID":"e038eec8-d039-4436-a9af-3bd09cb8479f","Type":"ContainerDied","Data":"f2bf2532f97ec5b3b461c4d766cd6a25c9213301533ceb44a132269f5074bd24"} Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.567571 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.599093 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-sb\") pod \"e038eec8-d039-4436-a9af-3bd09cb8479f\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.599159 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-nb\") pod \"e038eec8-d039-4436-a9af-3bd09cb8479f\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.599268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-swift-storage-0\") pod \"e038eec8-d039-4436-a9af-3bd09cb8479f\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.599340 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-svc\") pod \"e038eec8-d039-4436-a9af-3bd09cb8479f\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.599610 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f9km\" (UniqueName: \"kubernetes.io/projected/e038eec8-d039-4436-a9af-3bd09cb8479f-kube-api-access-9f9km\") pod \"e038eec8-d039-4436-a9af-3bd09cb8479f\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.599657 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-config\") pod \"e038eec8-d039-4436-a9af-3bd09cb8479f\" (UID: \"e038eec8-d039-4436-a9af-3bd09cb8479f\") " Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.620173 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e038eec8-d039-4436-a9af-3bd09cb8479f-kube-api-access-9f9km" (OuterVolumeSpecName: "kube-api-access-9f9km") pod "e038eec8-d039-4436-a9af-3bd09cb8479f" (UID: "e038eec8-d039-4436-a9af-3bd09cb8479f"). InnerVolumeSpecName "kube-api-access-9f9km". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.679462 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.706495 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f9km\" (UniqueName: \"kubernetes.io/projected/e038eec8-d039-4436-a9af-3bd09cb8479f-kube-api-access-9f9km\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.727530 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.749567 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e038eec8-d039-4436-a9af-3bd09cb8479f" (UID: "e038eec8-d039-4436-a9af-3bd09cb8479f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.757588 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f47c4bcff-7gd4c"] Mar 14 09:19:04 crc kubenswrapper[4869]: E0314 09:19:04.758022 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43206ff4-5f51-4c34-89af-92c875be15a7" containerName="mariadb-account-create-update" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758035 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="43206ff4-5f51-4c34-89af-92c875be15a7" containerName="mariadb-account-create-update" Mar 14 09:19:04 crc kubenswrapper[4869]: E0314 09:19:04.758049 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e038eec8-d039-4436-a9af-3bd09cb8479f" containerName="dnsmasq-dns" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758056 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e038eec8-d039-4436-a9af-3bd09cb8479f" containerName="dnsmasq-dns" Mar 14 09:19:04 crc kubenswrapper[4869]: E0314 09:19:04.758066 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c088cd02-57e6-4c2b-b2bd-4eede2aa610e" containerName="mariadb-account-create-update" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758073 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c088cd02-57e6-4c2b-b2bd-4eede2aa610e" containerName="mariadb-account-create-update" Mar 14 09:19:04 crc kubenswrapper[4869]: E0314 09:19:04.758086 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4d107b-ac87-494b-8aeb-83d4488e934c" containerName="keystone-db-sync" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758092 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4d107b-ac87-494b-8aeb-83d4488e934c" containerName="keystone-db-sync" Mar 14 09:19:04 crc kubenswrapper[4869]: E0314 09:19:04.758110 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="08c3104a-93f8-416c-b18c-c5434a205595" containerName="mariadb-account-create-update" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758115 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c3104a-93f8-416c-b18c-c5434a205595" containerName="mariadb-account-create-update" Mar 14 09:19:04 crc kubenswrapper[4869]: E0314 09:19:04.758125 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2" containerName="mariadb-database-create" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758130 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2" containerName="mariadb-database-create" Mar 14 09:19:04 crc kubenswrapper[4869]: E0314 09:19:04.758142 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17eb787e-6879-4fde-896d-6d22cab6748e" containerName="mariadb-database-create" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758147 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="17eb787e-6879-4fde-896d-6d22cab6748e" containerName="mariadb-database-create" Mar 14 09:19:04 crc kubenswrapper[4869]: E0314 09:19:04.758158 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e038eec8-d039-4436-a9af-3bd09cb8479f" containerName="init" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758165 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e038eec8-d039-4436-a9af-3bd09cb8479f" containerName="init" Mar 14 09:19:04 crc kubenswrapper[4869]: E0314 09:19:04.758175 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1e6856-cc32-474e-8623-48629ef12382" containerName="watcher-db-sync" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758180 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1e6856-cc32-474e-8623-48629ef12382" containerName="watcher-db-sync" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758387 4869 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2" containerName="mariadb-database-create" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758401 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="17eb787e-6879-4fde-896d-6d22cab6748e" containerName="mariadb-database-create" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758412 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e038eec8-d039-4436-a9af-3bd09cb8479f" containerName="dnsmasq-dns" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758418 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1e6856-cc32-474e-8623-48629ef12382" containerName="watcher-db-sync" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758433 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="43206ff4-5f51-4c34-89af-92c875be15a7" containerName="mariadb-account-create-update" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758441 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c088cd02-57e6-4c2b-b2bd-4eede2aa610e" containerName="mariadb-account-create-update" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758449 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4d107b-ac87-494b-8aeb-83d4488e934c" containerName="keystone-db-sync" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.758457 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c3104a-93f8-416c-b18c-c5434a205595" containerName="mariadb-account-create-update" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.762198 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.794268 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9fn6h"] Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.795358 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.822103 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zk6zl" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.822313 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.822457 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.825064 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.828021 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.829399 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.863335 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e038eec8-d039-4436-a9af-3bd09cb8479f" (UID: "e038eec8-d039-4436-a9af-3bd09cb8479f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.863488 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9fn6h"] Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.899744 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e038eec8-d039-4436-a9af-3bd09cb8479f" (UID: "e038eec8-d039-4436-a9af-3bd09cb8479f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.900764 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f47c4bcff-7gd4c"] Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.900862 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-config" (OuterVolumeSpecName: "config") pod "e038eec8-d039-4436-a9af-3bd09cb8479f" (UID: "e038eec8-d039-4436-a9af-3bd09cb8479f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.911750 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e038eec8-d039-4436-a9af-3bd09cb8479f" (UID: "e038eec8-d039-4436-a9af-3bd09cb8479f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.926863 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-combined-ca-bundle\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.926953 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x9x9\" (UniqueName: \"kubernetes.io/projected/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-kube-api-access-2x9x9\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.926995 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-sb\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927037 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-nb\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927082 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-config\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: 
\"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927123 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-fernet-keys\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927146 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm9kl\" (UniqueName: \"kubernetes.io/projected/49d2b8b0-8ce6-4672-a909-777c61c75a66-kube-api-access-bm9kl\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927173 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-svc\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-scripts\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927237 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-config-data\") pod \"keystone-bootstrap-9fn6h\" (UID: 
\"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927297 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-swift-storage-0\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927327 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-credential-keys\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927384 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927397 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927409 4869 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.927420 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e038eec8-d039-4436-a9af-3bd09cb8479f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 
09:19:04.930749 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.931790 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.935603 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-shpxz" Mar 14 09:19:04 crc kubenswrapper[4869]: I0314 09:19:04.935777 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.002740 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.005044 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.009869 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.028342 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-nb\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.028393 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-config-data\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.028422 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-config\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.028454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-fernet-keys\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.028470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm9kl\" (UniqueName: \"kubernetes.io/projected/49d2b8b0-8ce6-4672-a909-777c61c75a66-kube-api-access-bm9kl\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.028492 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-svc\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.029406 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-config\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.029810 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-scripts\") pod 
\"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.029863 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-config-data\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.029949 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-swift-storage-0\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.029977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-credential-keys\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.030070 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-combined-ca-bundle\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.030138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: 
\"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.030169 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x9x9\" (UniqueName: \"kubernetes.io/projected/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-kube-api-access-2x9x9\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.030213 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mxn\" (UniqueName: \"kubernetes.io/projected/6e1103e5-8974-4f6f-8240-9f000114e32b-kube-api-access-44mxn\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.030245 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-sb\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.030280 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1103e5-8974-4f6f-8240-9f000114e32b-logs\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.030975 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-nb\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " 
pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.035179 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-fernet-keys\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.036198 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-svc\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.039883 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.040630 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-swift-storage-0\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.041084 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.046252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-sb\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.046625 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.060925 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-config-data\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.062778 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-credential-keys\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.068940 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-scripts\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.078397 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-combined-ca-bundle\") pod 
\"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.082295 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.101573 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x9x9\" (UniqueName: \"kubernetes.io/projected/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-kube-api-access-2x9x9\") pod \"keystone-bootstrap-9fn6h\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.108496 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm9kl\" (UniqueName: \"kubernetes.io/projected/49d2b8b0-8ce6-4672-a909-777c61c75a66-kube-api-access-bm9kl\") pod \"dnsmasq-dns-f47c4bcff-7gd4c\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132167 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44mxn\" (UniqueName: \"kubernetes.io/projected/6e1103e5-8974-4f6f-8240-9f000114e32b-kube-api-access-44mxn\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132235 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6e1103e5-8974-4f6f-8240-9f000114e32b-logs\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132343 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132375 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-config-data\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27wg\" (UniqueName: \"kubernetes.io/projected/43ee2085-138c-40e9-a964-85029ba0d51b-kube-api-access-h27wg\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132475 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-config-data\") pod 
\"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132546 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132585 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ee2085-138c-40e9-a964-85029ba0d51b-logs\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132613 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1ffadc1-b64b-4763-a8b9-b5047caf3166-logs\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132753 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5nj\" (UniqueName: \"kubernetes.io/projected/c1ffadc1-b64b-4763-a8b9-b5047caf3166-kube-api-access-mv5nj\") pod \"watcher-decision-engine-0\" (UID: 
\"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.132782 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-config-data\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.133891 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1103e5-8974-4f6f-8240-9f000114e32b-logs\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.142854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-config-data\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.146907 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.162644 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.176274 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.199353 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44mxn\" (UniqueName: 
\"kubernetes.io/projected/6e1103e5-8974-4f6f-8240-9f000114e32b-kube-api-access-44mxn\") pod \"watcher-applier-0\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.216864 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-jmtdx"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.218718 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.224871 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5rxgp" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.225139 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.225443 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.229399 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6sb7f"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.235444 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.235484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h27wg\" (UniqueName: \"kubernetes.io/projected/43ee2085-138c-40e9-a964-85029ba0d51b-kube-api-access-h27wg\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 
09:19:05.235531 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-config-data\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.235556 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.235589 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ee2085-138c-40e9-a964-85029ba0d51b-logs\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.235613 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1ffadc1-b64b-4763-a8b9-b5047caf3166-logs\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.235698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5nj\" (UniqueName: \"kubernetes.io/projected/c1ffadc1-b64b-4763-a8b9-b5047caf3166-kube-api-access-mv5nj\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.235719 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-config-data\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.235736 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.235767 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.240080 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-config-data\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.244754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ee2085-138c-40e9-a964-85029ba0d51b-logs\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.245060 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1ffadc1-b64b-4763-a8b9-b5047caf3166-logs\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc 
kubenswrapper[4869]: I0314 09:19:05.254280 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.256170 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.256730 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.257876 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.261604 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jmtdx"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.261715 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.268551 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.269051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-config-data\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.276716 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.278445 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h27wg\" (UniqueName: \"kubernetes.io/projected/43ee2085-138c-40e9-a964-85029ba0d51b-kube-api-access-h27wg\") pod \"watcher-api-0\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.279281 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.279562 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.279828 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cr2r7" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.303040 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.308518 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6sb7f"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.308900 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5nj\" (UniqueName: \"kubernetes.io/projected/c1ffadc1-b64b-4763-a8b9-b5047caf3166-kube-api-access-mv5nj\") pod \"watcher-decision-engine-0\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.340411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-db-sync-config-data\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.340476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-combined-ca-bundle\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.340598 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwxwg\" (UniqueName: \"kubernetes.io/projected/5806f1f4-83ae-4f76-ba42-f4943cbef129-kube-api-access-jwxwg\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.340630 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-scripts\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.340656 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5806f1f4-83ae-4f76-ba42-f4943cbef129-etc-machine-id\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.340693 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-config-data\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.340955 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.367551 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7558987fbf-ps5jx"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.369691 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.379097 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.389730 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.389930 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.390043 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.390241 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-k4gqf" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.436581 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7558987fbf-ps5jx"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.442811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-config\") pod \"neutron-db-sync-6sb7f\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.442903 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwxwg\" (UniqueName: \"kubernetes.io/projected/5806f1f4-83ae-4f76-ba42-f4943cbef129-kube-api-access-jwxwg\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.442958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-scripts\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " 
pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.443001 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5806f1f4-83ae-4f76-ba42-f4943cbef129-etc-machine-id\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.443075 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-config-data\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.443129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-db-sync-config-data\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.443155 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-combined-ca-bundle\") pod \"neutron-db-sync-6sb7f\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.443184 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-combined-ca-bundle\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.443219 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6f8c\" (UniqueName: \"kubernetes.io/projected/8eda9c72-2272-45c8-b843-1c2b3c27f709-kube-api-access-h6f8c\") pod \"neutron-db-sync-6sb7f\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.444560 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5806f1f4-83ae-4f76-ba42-f4943cbef129-etc-machine-id\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.463761 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-config-data\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.464622 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-scripts\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.472293 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-combined-ca-bundle\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.478965 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kjlv8"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.480094 
4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.487969 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.488982 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-db-sync-config-data\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.504847 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kjlv8"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.505031 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cz7xs" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.512190 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwxwg\" (UniqueName: \"kubernetes.io/projected/5806f1f4-83ae-4f76-ba42-f4943cbef129-kube-api-access-jwxwg\") pod \"cinder-db-sync-jmtdx\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.544540 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed19444d-bcb2-4703-9de9-14828f14fed1-logs\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.544605 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-combined-ca-bundle\") pod \"neutron-db-sync-6sb7f\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.544633 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-scripts\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.544660 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6f8c\" (UniqueName: \"kubernetes.io/projected/8eda9c72-2272-45c8-b843-1c2b3c27f709-kube-api-access-h6f8c\") pod \"neutron-db-sync-6sb7f\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.544710 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed19444d-bcb2-4703-9de9-14828f14fed1-horizon-secret-key\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.544784 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-config\") pod \"neutron-db-sync-6sb7f\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.544841 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zfjr\" (UniqueName: 
\"kubernetes.io/projected/ed19444d-bcb2-4703-9de9-14828f14fed1-kube-api-access-7zfjr\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.544872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-config-data\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.549264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-combined-ca-bundle\") pod \"neutron-db-sync-6sb7f\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.557664 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f47c4bcff-7gd4c"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.559553 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.560411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" event={"ID":"e038eec8-d039-4436-a9af-3bd09cb8479f","Type":"ContainerDied","Data":"6effe8a3d289e6d6015458b90c01cfca4298f7f76b2c6ddaf73790cee2c94724"} Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.560447 4869 scope.go:117] "RemoveContainer" containerID="f2bf2532f97ec5b3b461c4d766cd6a25c9213301533ceb44a132269f5074bd24" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.564145 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-config\") pod \"neutron-db-sync-6sb7f\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.566575 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.585313 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65f8d579f9-g6vsd"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.626223 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.626570 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.629569 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6f8c\" (UniqueName: \"kubernetes.io/projected/8eda9c72-2272-45c8-b843-1c2b3c27f709-kube-api-access-h6f8c\") pod \"neutron-db-sync-6sb7f\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.646294 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-combined-ca-bundle\") pod \"barbican-db-sync-kjlv8\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.646396 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zfjr\" (UniqueName: \"kubernetes.io/projected/ed19444d-bcb2-4703-9de9-14828f14fed1-kube-api-access-7zfjr\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.646459 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-config-data\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.646549 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdcw7\" (UniqueName: \"kubernetes.io/projected/34747e66-40bd-4676-9d8e-673fb09120c0-kube-api-access-cdcw7\") pod \"barbican-db-sync-kjlv8\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " 
pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.646594 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed19444d-bcb2-4703-9de9-14828f14fed1-logs\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.646618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-scripts\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.646694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-db-sync-config-data\") pod \"barbican-db-sync-kjlv8\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.646727 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed19444d-bcb2-4703-9de9-14828f14fed1-horizon-secret-key\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.653477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-scripts\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.655341 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-config-data\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.676023 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed19444d-bcb2-4703-9de9-14828f14fed1-logs\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.678697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed19444d-bcb2-4703-9de9-14828f14fed1-horizon-secret-key\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.700611 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jxvhl"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.702103 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.733069 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.733537 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.733686 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-n6688" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.751587 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdcw7\" (UniqueName: \"kubernetes.io/projected/34747e66-40bd-4676-9d8e-673fb09120c0-kube-api-access-cdcw7\") pod \"barbican-db-sync-kjlv8\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.754268 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-db-sync-config-data\") pod \"barbican-db-sync-kjlv8\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.754338 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zfjr\" (UniqueName: \"kubernetes.io/projected/ed19444d-bcb2-4703-9de9-14828f14fed1-kube-api-access-7zfjr\") pod \"horizon-7558987fbf-ps5jx\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.754371 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-combined-ca-bundle\") 
pod \"barbican-db-sync-kjlv8\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.768861 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-combined-ca-bundle\") pod \"barbican-db-sync-kjlv8\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.788360 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-db-sync-config-data\") pod \"barbican-db-sync-kjlv8\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.813975 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdcw7\" (UniqueName: \"kubernetes.io/projected/34747e66-40bd-4676-9d8e-673fb09120c0-kube-api-access-cdcw7\") pod \"barbican-db-sync-kjlv8\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.857999 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-config-data\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.858044 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/369e3d1c-e33f-46d8-8a70-8d43f4df8878-horizon-secret-key\") pod \"horizon-65f8d579f9-g6vsd\" (UID: 
\"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.858070 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrgb\" (UniqueName: \"kubernetes.io/projected/e612c02e-1383-4a14-9267-e1742cb95cc7-kube-api-access-gkrgb\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.858111 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-config-data\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.858135 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-scripts\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.858150 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612c02e-1383-4a14-9267-e1742cb95cc7-logs\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.858176 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-combined-ca-bundle\") pod \"placement-db-sync-jxvhl\" (UID: 
\"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.858230 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22wcp\" (UniqueName: \"kubernetes.io/projected/369e3d1c-e33f-46d8-8a70-8d43f4df8878-kube-api-access-22wcp\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.858292 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-scripts\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.858314 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/369e3d1c-e33f-46d8-8a70-8d43f4df8878-logs\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.874133 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.905722 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jxvhl"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.905757 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65f8d579f9-g6vsd"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.905771 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.912713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.925015 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b869c6f79-cntzf"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.926143 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.926227 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.926718 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.930972 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.931219 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962619 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2d9\" (UniqueName: \"kubernetes.io/projected/60f01a69-1d04-4788-b13d-f944b5f37b06-kube-api-access-kb2d9\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962671 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-log-httpd\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-run-httpd\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-scripts\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962778 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/369e3d1c-e33f-46d8-8a70-8d43f4df8878-logs\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962800 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962824 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-nb\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962852 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-config-data\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962868 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/369e3d1c-e33f-46d8-8a70-8d43f4df8878-horizon-secret-key\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962902 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkrgb\" (UniqueName: 
\"kubernetes.io/projected/e612c02e-1383-4a14-9267-e1742cb95cc7-kube-api-access-gkrgb\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962933 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962951 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgzxf\" (UniqueName: \"kubernetes.io/projected/d0a3057f-b699-4f14-bfa0-7bda292b3c82-kube-api-access-dgzxf\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.962985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-config-data\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.963008 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-scripts\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.963027 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612c02e-1383-4a14-9267-e1742cb95cc7-logs\") pod 
\"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.964640 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/369e3d1c-e33f-46d8-8a70-8d43f4df8878-logs\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.969673 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b869c6f79-cntzf"] Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.973374 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612c02e-1383-4a14-9267-e1742cb95cc7-logs\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.986312 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-config-data\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.992268 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-scripts\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:05 crc kubenswrapper[4869]: I0314 09:19:05.997623 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-config-data\") pod \"horizon-65f8d579f9-g6vsd\" (UID: 
\"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.005159 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-scripts\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.009105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/369e3d1c-e33f-46d8-8a70-8d43f4df8878-horizon-secret-key\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.019736 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-swift-storage-0\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.019809 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-config-data\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.019863 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-combined-ca-bundle\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 
09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.019905 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-sb\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.019939 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-svc\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.019972 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-scripts\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.019997 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-config\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.020028 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22wcp\" (UniqueName: \"kubernetes.io/projected/369e3d1c-e33f-46d8-8a70-8d43f4df8878-kube-api-access-22wcp\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 
09:19:06.040906 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkrgb\" (UniqueName: \"kubernetes.io/projected/e612c02e-1383-4a14-9267-e1742cb95cc7-kube-api-access-gkrgb\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.045270 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.061316 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-combined-ca-bundle\") pod \"placement-db-sync-jxvhl\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.080010 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.095964 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.097185 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22wcp\" (UniqueName: \"kubernetes.io/projected/369e3d1c-e33f-46d8-8a70-8d43f4df8878-kube-api-access-22wcp\") pod \"horizon-65f8d579f9-g6vsd\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.102900 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.103089 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.103359 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.124618 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ghmv7" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125712 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgzxf\" (UniqueName: \"kubernetes.io/projected/d0a3057f-b699-4f14-bfa0-7bda292b3c82-kube-api-access-dgzxf\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125753 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125810 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-swift-storage-0\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-config-data\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125868 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-sb\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-svc\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125916 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-scripts\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125937 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-config\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb2d9\" (UniqueName: \"kubernetes.io/projected/60f01a69-1d04-4788-b13d-f944b5f37b06-kube-api-access-kb2d9\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.125986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-log-httpd\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.126004 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-run-httpd\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.126052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.126079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-nb\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " 
pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.127101 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-nb\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.130938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-swift-storage-0\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.131197 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.131866 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-config\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.132543 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-sb\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.133105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-svc\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" 
(UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.140061 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.156157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-log-httpd\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.170849 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-run-httpd\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.188964 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.189088 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.189681 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgzxf\" (UniqueName: \"kubernetes.io/projected/d0a3057f-b699-4f14-bfa0-7bda292b3c82-kube-api-access-dgzxf\") pod \"dnsmasq-dns-6b869c6f79-cntzf\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " 
pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.197001 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-config-data\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.201103 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.205166 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.205987 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.216181 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-scripts\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.216314 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.222161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2d9\" (UniqueName: \"kubernetes.io/projected/60f01a69-1d04-4788-b13d-f944b5f37b06-kube-api-access-kb2d9\") pod \"ceilometer-0\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.276341 4869 scope.go:117] "RemoveContainer" containerID="7b95961f3686a9519b7f31bf1e2ec2841495aa8d649ecb04f563871eae09d699" Mar 14 09:19:06 crc 
kubenswrapper[4869]: I0314 09:19:06.334644 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-config-data\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.334713 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.334750 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.334770 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.334789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsh5v\" (UniqueName: \"kubernetes.io/projected/85422929-62ea-468f-8f33-5c663a915aac-kube-api-access-xsh5v\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " 
pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.334928 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335029 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-scripts\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335066 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335137 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-logs\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335182 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " 
pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335254 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-logs\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335326 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h47x5\" (UniqueName: \"kubernetes.io/projected/b179f677-b76f-4f8c-813c-e80ddbca8632-kube-api-access-h47x5\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335354 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335395 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") 
" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.335518 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.383873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.434339 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jxvhl" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439336 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-logs\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439370 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 
09:19:06.439404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-logs\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439438 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h47x5\" (UniqueName: \"kubernetes.io/projected/b179f677-b76f-4f8c-813c-e80ddbca8632-kube-api-access-h47x5\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439472 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439566 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-config-data\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439584 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439647 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsh5v\" (UniqueName: 
\"kubernetes.io/projected/85422929-62ea-468f-8f33-5c663a915aac-kube-api-access-xsh5v\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439679 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.439712 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-scripts\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.453098 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.454557 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-logs\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.454920 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-logs\") pod \"glance-default-external-api-0\" (UID: 
\"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.456195 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.460848 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.462228 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.462965 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.464858 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-scripts\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " 
pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.469928 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.473347 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.474965 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.486778 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.505193 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-config-data\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.509056 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f47c4bcff-7gd4c"] Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.522371 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.523955 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h47x5\" (UniqueName: \"kubernetes.io/projected/b179f677-b76f-4f8c-813c-e80ddbca8632-kube-api-access-h47x5\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.531271 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsh5v\" (UniqueName: \"kubernetes.io/projected/85422929-62ea-468f-8f33-5c663a915aac-kube-api-access-xsh5v\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.552927 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.575987 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.589269 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.630970 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:19:06 crc kubenswrapper[4869]: I0314 09:19:06.991758 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:07 crc kubenswrapper[4869]: E0314 09:19:07.002162 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode038eec8_d039_4436_a9af_3bd09cb8479f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode038eec8_d039_4436_a9af_3bd09cb8479f.slice/crio-6effe8a3d289e6d6015458b90c01cfca4298f7f76b2c6ddaf73790cee2c94724\": RecentStats: unable to find data in memory cache]" Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.003755 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.591636 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.597314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"43ee2085-138c-40e9-a964-85029ba0d51b","Type":"ContainerStarted","Data":"1e083634d9bd314449c420356505e826a3fc74c8a2ff11bc888b5ec0a7c5d857"} Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.599105 4869 generic.go:334] "Generic (PLEG): container finished" podID="49d2b8b0-8ce6-4672-a909-777c61c75a66" containerID="988206da92681ab05e15768a7826f27a1be73d007adf8636dddd78bf282e6742" exitCode=0 Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.599146 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" event={"ID":"49d2b8b0-8ce6-4672-a909-777c61c75a66","Type":"ContainerDied","Data":"988206da92681ab05e15768a7826f27a1be73d007adf8636dddd78bf282e6742"} Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.599210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" event={"ID":"49d2b8b0-8ce6-4672-a909-777c61c75a66","Type":"ContainerStarted","Data":"016cb381537635a919b475d32e7d687b6f83b30c1a83677847d8942257a21855"} Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.604114 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.612154 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jmtdx"] Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.807358 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9fn6h"] Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.816127 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-sync-kjlv8"] Mar 14 09:19:07 crc kubenswrapper[4869]: W0314 09:19:07.832469 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eda9c72_2272_45c8_b843_1c2b3c27f709.slice/crio-d4e3f70dbd58daec23a47a4032447475b9b293cfb1d7c5fb8a9413b0bf995d4e WatchSource:0}: Error finding container d4e3f70dbd58daec23a47a4032447475b9b293cfb1d7c5fb8a9413b0bf995d4e: Status 404 returned error can't find the container with id d4e3f70dbd58daec23a47a4032447475b9b293cfb1d7c5fb8a9413b0bf995d4e Mar 14 09:19:07 crc kubenswrapper[4869]: I0314 09:19:07.833413 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6sb7f"] Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.182016 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.350404 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-sb\") pod \"49d2b8b0-8ce6-4672-a909-777c61c75a66\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.350646 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-nb\") pod \"49d2b8b0-8ce6-4672-a909-777c61c75a66\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.350703 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm9kl\" (UniqueName: \"kubernetes.io/projected/49d2b8b0-8ce6-4672-a909-777c61c75a66-kube-api-access-bm9kl\") pod \"49d2b8b0-8ce6-4672-a909-777c61c75a66\" (UID: 
\"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.350858 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-swift-storage-0\") pod \"49d2b8b0-8ce6-4672-a909-777c61c75a66\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.350893 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-svc\") pod \"49d2b8b0-8ce6-4672-a909-777c61c75a66\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.351048 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-config\") pod \"49d2b8b0-8ce6-4672-a909-777c61c75a66\" (UID: \"49d2b8b0-8ce6-4672-a909-777c61c75a66\") " Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.365858 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d2b8b0-8ce6-4672-a909-777c61c75a66-kube-api-access-bm9kl" (OuterVolumeSpecName: "kube-api-access-bm9kl") pod "49d2b8b0-8ce6-4672-a909-777c61c75a66" (UID: "49d2b8b0-8ce6-4672-a909-777c61c75a66"). InnerVolumeSpecName "kube-api-access-bm9kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.397281 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49d2b8b0-8ce6-4672-a909-777c61c75a66" (UID: "49d2b8b0-8ce6-4672-a909-777c61c75a66"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.427640 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-config" (OuterVolumeSpecName: "config") pod "49d2b8b0-8ce6-4672-a909-777c61c75a66" (UID: "49d2b8b0-8ce6-4672-a909-777c61c75a66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.428262 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49d2b8b0-8ce6-4672-a909-777c61c75a66" (UID: "49d2b8b0-8ce6-4672-a909-777c61c75a66"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.431948 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49d2b8b0-8ce6-4672-a909-777c61c75a66" (UID: "49d2b8b0-8ce6-4672-a909-777c61c75a66"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.438476 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49d2b8b0-8ce6-4672-a909-777c61c75a66" (UID: "49d2b8b0-8ce6-4672-a909-777c61c75a66"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.453831 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.453874 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm9kl\" (UniqueName: \"kubernetes.io/projected/49d2b8b0-8ce6-4672-a909-777c61c75a66-kube-api-access-bm9kl\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.453891 4869 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.453914 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.453930 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.453945 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49d2b8b0-8ce6-4672-a909-777c61c75a66-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.493337 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jxvhl"] Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.529586 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:08 crc 
kubenswrapper[4869]: W0314 09:19:08.549671 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded19444d_bcb2_4703_9de9_14828f14fed1.slice/crio-b2d931c289ef917fa1be54a23cfa5c49159e76af5d2ee4afd89037370b19da8b WatchSource:0}: Error finding container b2d931c289ef917fa1be54a23cfa5c49159e76af5d2ee4afd89037370b19da8b: Status 404 returned error can't find the container with id b2d931c289ef917fa1be54a23cfa5c49159e76af5d2ee4afd89037370b19da8b Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.695085 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7558987fbf-ps5jx"] Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.706237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"6e1103e5-8974-4f6f-8240-9f000114e32b","Type":"ContainerStarted","Data":"ed3a9bb658aa87b6409aa7e968e3388e63d40bd79c092433d49bf431410bb35e"} Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.770766 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6sb7f" event={"ID":"8eda9c72-2272-45c8-b843-1c2b3c27f709","Type":"ContainerStarted","Data":"63afeaed1a472f127b732df459b006e941361f28145823c009d0f9d940099676"} Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.770827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6sb7f" event={"ID":"8eda9c72-2272-45c8-b843-1c2b3c27f709","Type":"ContainerStarted","Data":"d4e3f70dbd58daec23a47a4032447475b9b293cfb1d7c5fb8a9413b0bf995d4e"} Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.823810 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b869c6f79-cntzf"] Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.868955 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"b179f677-b76f-4f8c-813c-e80ddbca8632","Type":"ContainerStarted","Data":"5689f3f1b1d1d84a5456cf90f3358af2de90a04b676e303cb10886f0bdd60290"} Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.917492 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.926725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kjlv8" event={"ID":"34747e66-40bd-4676-9d8e-673fb09120c0","Type":"ContainerStarted","Data":"1f63c43ebe907e6da699558c0a21de5cd27097e32487b94dcfb1a1ba21032c83"} Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.955586 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65f8d579f9-g6vsd"] Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.957931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7558987fbf-ps5jx" event={"ID":"ed19444d-bcb2-4703-9de9-14828f14fed1","Type":"ContainerStarted","Data":"b2d931c289ef917fa1be54a23cfa5c49159e76af5d2ee4afd89037370b19da8b"} Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.963832 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c1ffadc1-b64b-4763-a8b9-b5047caf3166","Type":"ContainerStarted","Data":"8508764cf525b76ec32be94f08a8c1ba879cd71a851ca511ae3e5017971d0c8d"} Mar 14 09:19:08 crc kubenswrapper[4869]: I0314 09:19:08.967230 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.000125 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"43ee2085-138c-40e9-a964-85029ba0d51b","Type":"ContainerStarted","Data":"ce4ed8defe687a6df6f01798d7371508e629beeb0f8de9da2c16edf7df5765cc"} Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.000178 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-api-0" event={"ID":"43ee2085-138c-40e9-a964-85029ba0d51b","Type":"ContainerStarted","Data":"0e807146af10c5e469efb95a0d88eabbc056c22e198ecdc2d08809d58cb82f8a"} Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.001559 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.003839 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.007268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" event={"ID":"49d2b8b0-8ce6-4672-a909-777c61c75a66","Type":"ContainerDied","Data":"016cb381537635a919b475d32e7d687b6f83b30c1a83677847d8942257a21855"} Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.007306 4869 scope.go:117] "RemoveContainer" containerID="988206da92681ab05e15768a7826f27a1be73d007adf8636dddd78bf282e6742" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.007436 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f47c4bcff-7gd4c" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.024460 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jmtdx" event={"ID":"5806f1f4-83ae-4f76-ba42-f4943cbef129","Type":"ContainerStarted","Data":"d513a6790f1602a25f96ef0385b25181addc525a583eaba2b2ecb5fae3a61722"} Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.040093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9fn6h" event={"ID":"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf","Type":"ContainerStarted","Data":"7de8d92e14a9f466f55da25c1007ed91c74b49efab49ce1a891348d1a268f783"} Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.040136 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9fn6h" event={"ID":"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf","Type":"ContainerStarted","Data":"da48706441797a3e13844dabc040f6d49ccb27da7a314f246bf1ab250775d1c2"} Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.045330 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.084736 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65f8d579f9-g6vsd"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.114039 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.129429 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5858c9f6c-clfct"] Mar 14 09:19:09 crc kubenswrapper[4869]: E0314 09:19:09.130128 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d2b8b0-8ce6-4672-a909-777c61c75a66" containerName="init" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.130153 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d2b8b0-8ce6-4672-a909-777c61c75a66" 
containerName="init" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.130368 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d2b8b0-8ce6-4672-a909-777c61c75a66" containerName="init" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.135231 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.137896 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5858c9f6c-clfct"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.141402 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6sb7f" podStartSLOduration=4.141383368 podStartE2EDuration="4.141383368s" podCreationTimestamp="2026-03-14 09:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:08.860357124 +0000 UTC m=+1301.832639197" watchObservedRunningTime="2026-03-14 09:19:09.141383368 +0000 UTC m=+1302.113665421" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.159914 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.163690 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.171451 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.17142707 podStartE2EDuration="5.17142707s" podCreationTimestamp="2026-03-14 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:09.041395931 +0000 UTC m=+1302.013677984" watchObservedRunningTime="2026-03-14 09:19:09.17142707 +0000 UTC m=+1302.143709133" Mar 14 09:19:09 crc 
kubenswrapper[4869]: I0314 09:19:09.180644 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-horizon-secret-key\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.180733 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-config-data\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.180808 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-logs\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.180852 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-scripts\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.180901 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwd49\" (UniqueName: \"kubernetes.io/projected/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-kube-api-access-pwd49\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 
09:19:09.191155 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f47c4bcff-7gd4c"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.206370 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f47c4bcff-7gd4c"] Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.206932 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9fn6h" podStartSLOduration=5.206916505 podStartE2EDuration="5.206916505s" podCreationTimestamp="2026-03-14 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:09.13211904 +0000 UTC m=+1302.104401103" watchObservedRunningTime="2026-03-14 09:19:09.206916505 +0000 UTC m=+1302.179198558" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.282392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwd49\" (UniqueName: \"kubernetes.io/projected/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-kube-api-access-pwd49\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.282538 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-horizon-secret-key\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.282558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-config-data\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 
14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.282604 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-logs\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.282635 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-scripts\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.283582 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-scripts\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.283678 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-logs\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.284699 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-config-data\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.290725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-horizon-secret-key\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.303051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwd49\" (UniqueName: \"kubernetes.io/projected/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-kube-api-access-pwd49\") pod \"horizon-5858c9f6c-clfct\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.475285 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:09 crc kubenswrapper[4869]: I0314 09:19:09.761563 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d2b8b0-8ce6-4672-a909-777c61c75a66" path="/var/lib/kubelet/pods/49d2b8b0-8ce6-4672-a909-777c61c75a66/volumes" Mar 14 09:19:10 crc kubenswrapper[4869]: I0314 09:19:10.037195 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5858c9f6c-clfct"] Mar 14 09:19:10 crc kubenswrapper[4869]: I0314 09:19:10.088024 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jxvhl" event={"ID":"e612c02e-1383-4a14-9267-e1742cb95cc7","Type":"ContainerStarted","Data":"9db6936d5a2a8785e294d2918964ca325b11d2939cc302e03aa58d1620894cad"} Mar 14 09:19:10 crc kubenswrapper[4869]: I0314 09:19:10.090761 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85422929-62ea-468f-8f33-5c663a915aac","Type":"ContainerStarted","Data":"bd09464a0026461d36cf96479207e80fdd948efe7c93db3ed6e52998f13eec17"} Mar 14 09:19:10 crc kubenswrapper[4869]: I0314 09:19:10.112008 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65f8d579f9-g6vsd" 
event={"ID":"369e3d1c-e33f-46d8-8a70-8d43f4df8878","Type":"ContainerStarted","Data":"213af1812b558cb2eea8d33ff8624a93f9d3645f90296de329d26f3483caf503"} Mar 14 09:19:10 crc kubenswrapper[4869]: I0314 09:19:10.122467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60f01a69-1d04-4788-b13d-f944b5f37b06","Type":"ContainerStarted","Data":"e856645248b7c3a3eb211f61bc1e7dfa3bc5a134ce8c170e40f38824358f68cc"} Mar 14 09:19:10 crc kubenswrapper[4869]: I0314 09:19:10.139940 4869 generic.go:334] "Generic (PLEG): container finished" podID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" containerID="89ac629ee8f19097ea11745c28f503e92f7315fda111195b229df18b85d22fff" exitCode=0 Mar 14 09:19:10 crc kubenswrapper[4869]: I0314 09:19:10.140447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" event={"ID":"d0a3057f-b699-4f14-bfa0-7bda292b3c82","Type":"ContainerDied","Data":"89ac629ee8f19097ea11745c28f503e92f7315fda111195b229df18b85d22fff"} Mar 14 09:19:10 crc kubenswrapper[4869]: I0314 09:19:10.140521 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" event={"ID":"d0a3057f-b699-4f14-bfa0-7bda292b3c82","Type":"ContainerStarted","Data":"816348e27aae2facdbb375ba1001fd7353da2207b0a1c3b1e189f9ffc84b11be"} Mar 14 09:19:10 crc kubenswrapper[4869]: I0314 09:19:10.341964 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:19:11 crc kubenswrapper[4869]: I0314 09:19:11.156994 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:19:11 crc kubenswrapper[4869]: I0314 09:19:11.157606 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api-log" containerID="cri-o://0e807146af10c5e469efb95a0d88eabbc056c22e198ecdc2d08809d58cb82f8a" gracePeriod=30 Mar 14 
09:19:11 crc kubenswrapper[4869]: I0314 09:19:11.158274 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" containerID="cri-o://ce4ed8defe687a6df6f01798d7371508e629beeb0f8de9da2c16edf7df5765cc" gracePeriod=30 Mar 14 09:19:11 crc kubenswrapper[4869]: I0314 09:19:11.170597 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/watcher-api-0" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": EOF" Mar 14 09:19:11 crc kubenswrapper[4869]: I0314 09:19:11.171092 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": EOF" Mar 14 09:19:11 crc kubenswrapper[4869]: I0314 09:19:11.181559 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": EOF" Mar 14 09:19:12 crc kubenswrapper[4869]: I0314 09:19:12.172489 4869 generic.go:334] "Generic (PLEG): container finished" podID="43ee2085-138c-40e9-a964-85029ba0d51b" containerID="0e807146af10c5e469efb95a0d88eabbc056c22e198ecdc2d08809d58cb82f8a" exitCode=143 Mar 14 09:19:12 crc kubenswrapper[4869]: I0314 09:19:12.172613 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"43ee2085-138c-40e9-a964-85029ba0d51b","Type":"ContainerDied","Data":"0e807146af10c5e469efb95a0d88eabbc056c22e198ecdc2d08809d58cb82f8a"} Mar 14 09:19:12 crc kubenswrapper[4869]: I0314 09:19:12.174472 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"b179f677-b76f-4f8c-813c-e80ddbca8632","Type":"ContainerStarted","Data":"408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda"} Mar 14 09:19:12 crc kubenswrapper[4869]: W0314 09:19:12.931126 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda87045a9_e2a7_4c0e_b98e_7684cdfb6a62.slice/crio-f6121c58f5e4bc4214a6801e73be931768dc328afb602953968514e5b2fb6cdc WatchSource:0}: Error finding container f6121c58f5e4bc4214a6801e73be931768dc328afb602953968514e5b2fb6cdc: Status 404 returned error can't find the container with id f6121c58f5e4bc4214a6801e73be931768dc328afb602953968514e5b2fb6cdc Mar 14 09:19:13 crc kubenswrapper[4869]: I0314 09:19:13.187254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85422929-62ea-468f-8f33-5c663a915aac","Type":"ContainerStarted","Data":"5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96"} Mar 14 09:19:13 crc kubenswrapper[4869]: I0314 09:19:13.188531 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5858c9f6c-clfct" event={"ID":"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62","Type":"ContainerStarted","Data":"f6121c58f5e4bc4214a6801e73be931768dc328afb602953968514e5b2fb6cdc"} Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.009310 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7558987fbf-ps5jx"] Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.051025 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b646449c6-8g8ql"] Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.063677 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.066387 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.073338 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b646449c6-8g8ql"] Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.126208 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c776b1be-07b2-4de0-808f-48c9a550aaa4-logs\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.126287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c776b1be-07b2-4de0-808f-48c9a550aaa4-combined-ca-bundle\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.126314 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9gn\" (UniqueName: \"kubernetes.io/projected/c776b1be-07b2-4de0-808f-48c9a550aaa4-kube-api-access-xb9gn\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.126373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c776b1be-07b2-4de0-808f-48c9a550aaa4-scripts\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc 
kubenswrapper[4869]: I0314 09:19:14.126613 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c776b1be-07b2-4de0-808f-48c9a550aaa4-horizon-secret-key\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.126787 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c776b1be-07b2-4de0-808f-48c9a550aaa4-config-data\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.126860 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c776b1be-07b2-4de0-808f-48c9a550aaa4-horizon-tls-certs\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.166729 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5858c9f6c-clfct"] Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.208246 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9d48d6888-26pm7"] Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.213982 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.228600 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c776b1be-07b2-4de0-808f-48c9a550aaa4-horizon-secret-key\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.228648 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c776b1be-07b2-4de0-808f-48c9a550aaa4-config-data\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.228673 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c776b1be-07b2-4de0-808f-48c9a550aaa4-horizon-tls-certs\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.228705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c776b1be-07b2-4de0-808f-48c9a550aaa4-logs\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.228723 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c776b1be-07b2-4de0-808f-48c9a550aaa4-combined-ca-bundle\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: 
I0314 09:19:14.228740 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb9gn\" (UniqueName: \"kubernetes.io/projected/c776b1be-07b2-4de0-808f-48c9a550aaa4-kube-api-access-xb9gn\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.228776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c776b1be-07b2-4de0-808f-48c9a550aaa4-scripts\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.229405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c776b1be-07b2-4de0-808f-48c9a550aaa4-scripts\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.230011 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c776b1be-07b2-4de0-808f-48c9a550aaa4-logs\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.231016 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c776b1be-07b2-4de0-808f-48c9a550aaa4-config-data\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.235034 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9d48d6888-26pm7"] Mar 14 09:19:14 crc kubenswrapper[4869]: 
I0314 09:19:14.235660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c776b1be-07b2-4de0-808f-48c9a550aaa4-horizon-secret-key\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.235898 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c776b1be-07b2-4de0-808f-48c9a550aaa4-horizon-tls-certs\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.254344 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb9gn\" (UniqueName: \"kubernetes.io/projected/c776b1be-07b2-4de0-808f-48c9a550aaa4-kube-api-access-xb9gn\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.255628 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c776b1be-07b2-4de0-808f-48c9a550aaa4-combined-ca-bundle\") pod \"horizon-6b646449c6-8g8ql\" (UID: \"c776b1be-07b2-4de0-808f-48c9a550aaa4\") " pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.285805 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": read tcp 10.217.0.2:48668->10.217.0.155:9322: read: connection reset by peer" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.330273 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/90750956-6a92-4c2c-8213-07cd62712ba1-horizon-tls-certs\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.330928 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90750956-6a92-4c2c-8213-07cd62712ba1-scripts\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.330982 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90750956-6a92-4c2c-8213-07cd62712ba1-logs\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.331100 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90750956-6a92-4c2c-8213-07cd62712ba1-horizon-secret-key\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.331173 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90750956-6a92-4c2c-8213-07cd62712ba1-combined-ca-bundle\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.331223 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/90750956-6a92-4c2c-8213-07cd62712ba1-config-data\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.331249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzcdp\" (UniqueName: \"kubernetes.io/projected/90750956-6a92-4c2c-8213-07cd62712ba1-kube-api-access-wzcdp\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.404235 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.432827 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90750956-6a92-4c2c-8213-07cd62712ba1-logs\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.432939 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90750956-6a92-4c2c-8213-07cd62712ba1-horizon-secret-key\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.432985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90750956-6a92-4c2c-8213-07cd62712ba1-combined-ca-bundle\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.433013 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90750956-6a92-4c2c-8213-07cd62712ba1-config-data\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.433567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90750956-6a92-4c2c-8213-07cd62712ba1-logs\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.433030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzcdp\" (UniqueName: \"kubernetes.io/projected/90750956-6a92-4c2c-8213-07cd62712ba1-kube-api-access-wzcdp\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.434662 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/90750956-6a92-4c2c-8213-07cd62712ba1-horizon-tls-certs\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.434733 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90750956-6a92-4c2c-8213-07cd62712ba1-scripts\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.434746 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/90750956-6a92-4c2c-8213-07cd62712ba1-config-data\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.435484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90750956-6a92-4c2c-8213-07cd62712ba1-scripts\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.438662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90750956-6a92-4c2c-8213-07cd62712ba1-combined-ca-bundle\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.443583 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90750956-6a92-4c2c-8213-07cd62712ba1-horizon-secret-key\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.443973 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/90750956-6a92-4c2c-8213-07cd62712ba1-horizon-tls-certs\") pod \"horizon-9d48d6888-26pm7\" (UID: \"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.452751 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzcdp\" (UniqueName: \"kubernetes.io/projected/90750956-6a92-4c2c-8213-07cd62712ba1-kube-api-access-wzcdp\") pod \"horizon-9d48d6888-26pm7\" (UID: 
\"90750956-6a92-4c2c-8213-07cd62712ba1\") " pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:14 crc kubenswrapper[4869]: I0314 09:19:14.538410 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:15 crc kubenswrapper[4869]: I0314 09:19:15.218860 4869 generic.go:334] "Generic (PLEG): container finished" podID="43ee2085-138c-40e9-a964-85029ba0d51b" containerID="ce4ed8defe687a6df6f01798d7371508e629beeb0f8de9da2c16edf7df5765cc" exitCode=0 Mar 14 09:19:15 crc kubenswrapper[4869]: I0314 09:19:15.218903 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"43ee2085-138c-40e9-a964-85029ba0d51b","Type":"ContainerDied","Data":"ce4ed8defe687a6df6f01798d7371508e629beeb0f8de9da2c16edf7df5765cc"} Mar 14 09:19:15 crc kubenswrapper[4869]: I0314 09:19:15.342100 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": dial tcp 10.217.0.155:9322: connect: connection refused" Mar 14 09:19:18 crc kubenswrapper[4869]: I0314 09:19:18.249376 4869 generic.go:334] "Generic (PLEG): container finished" podID="d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" containerID="7de8d92e14a9f466f55da25c1007ed91c74b49efab49ce1a891348d1a268f783" exitCode=0 Mar 14 09:19:18 crc kubenswrapper[4869]: I0314 09:19:18.249450 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9fn6h" event={"ID":"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf","Type":"ContainerDied","Data":"7de8d92e14a9f466f55da25c1007ed91c74b49efab49ce1a891348d1a268f783"} Mar 14 09:19:23 crc kubenswrapper[4869]: E0314 09:19:23.363878 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.153:5001/podified-master-centos10/openstack-horizon:watcher_latest" Mar 14 09:19:23 crc kubenswrapper[4869]: E0314 09:19:23.365012 4869 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-horizon:watcher_latest" Mar 14 09:19:23 crc kubenswrapper[4869]: E0314 09:19:23.365364 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.153:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc8h5b9h67dh558h665hch55bh5fh674hb9h548h5f9h679h675h7bh5c8h594h6chd8h58bh76h98h646h65ch8fh7fh57fhf9h5d5h69h5bbh568q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-22wcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,
RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-65f8d579f9-g6vsd_openstack(369e3d1c-e33f-46d8-8a70-8d43f4df8878): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:19:23 crc kubenswrapper[4869]: E0314 09:19:23.369729 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.153:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-65f8d579f9-g6vsd" podUID="369e3d1c-e33f-46d8-8a70-8d43f4df8878" Mar 14 09:19:25 crc kubenswrapper[4869]: I0314 09:19:25.342593 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.054838 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.060096 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.138931 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h27wg\" (UniqueName: \"kubernetes.io/projected/43ee2085-138c-40e9-a964-85029ba0d51b-kube-api-access-h27wg\") pod \"43ee2085-138c-40e9-a964-85029ba0d51b\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.139032 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-config-data\") pod \"43ee2085-138c-40e9-a964-85029ba0d51b\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.139231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-custom-prometheus-ca\") pod \"43ee2085-138c-40e9-a964-85029ba0d51b\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.139266 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-combined-ca-bundle\") pod \"43ee2085-138c-40e9-a964-85029ba0d51b\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.139328 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ee2085-138c-40e9-a964-85029ba0d51b-logs\") pod \"43ee2085-138c-40e9-a964-85029ba0d51b\" (UID: \"43ee2085-138c-40e9-a964-85029ba0d51b\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.140093 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/43ee2085-138c-40e9-a964-85029ba0d51b-logs" (OuterVolumeSpecName: "logs") pod "43ee2085-138c-40e9-a964-85029ba0d51b" (UID: "43ee2085-138c-40e9-a964-85029ba0d51b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.144546 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ee2085-138c-40e9-a964-85029ba0d51b-kube-api-access-h27wg" (OuterVolumeSpecName: "kube-api-access-h27wg") pod "43ee2085-138c-40e9-a964-85029ba0d51b" (UID: "43ee2085-138c-40e9-a964-85029ba0d51b"). InnerVolumeSpecName "kube-api-access-h27wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.169277 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "43ee2085-138c-40e9-a964-85029ba0d51b" (UID: "43ee2085-138c-40e9-a964-85029ba0d51b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.179452 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43ee2085-138c-40e9-a964-85029ba0d51b" (UID: "43ee2085-138c-40e9-a964-85029ba0d51b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.194102 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-config-data" (OuterVolumeSpecName: "config-data") pod "43ee2085-138c-40e9-a964-85029ba0d51b" (UID: "43ee2085-138c-40e9-a964-85029ba0d51b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.240856 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-credential-keys\") pod \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.240919 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-scripts\") pod \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.241130 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x9x9\" (UniqueName: \"kubernetes.io/projected/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-kube-api-access-2x9x9\") pod \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.241167 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-combined-ca-bundle\") pod \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.241225 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-fernet-keys\") pod \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.241256 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-config-data\") pod \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\" (UID: \"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf\") " Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.241704 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ee2085-138c-40e9-a964-85029ba0d51b-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.241724 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h27wg\" (UniqueName: \"kubernetes.io/projected/43ee2085-138c-40e9-a964-85029ba0d51b-kube-api-access-h27wg\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.241734 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.241743 4869 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.241751 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ee2085-138c-40e9-a964-85029ba0d51b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.244959 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" (UID: "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.245835 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-scripts" (OuterVolumeSpecName: "scripts") pod "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" (UID: "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.247167 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" (UID: "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.247893 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-kube-api-access-2x9x9" (OuterVolumeSpecName: "kube-api-access-2x9x9") pod "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" (UID: "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf"). InnerVolumeSpecName "kube-api-access-2x9x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.266711 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-config-data" (OuterVolumeSpecName: "config-data") pod "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" (UID: "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.269859 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" (UID: "d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.343840 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.155:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.344090 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x9x9\" (UniqueName: \"kubernetes.io/projected/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-kube-api-access-2x9x9\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.344121 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.344132 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.344142 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 
09:19:30.344151 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.344158 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.383787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"43ee2085-138c-40e9-a964-85029ba0d51b","Type":"ContainerDied","Data":"1e083634d9bd314449c420356505e826a3fc74c8a2ff11bc888b5ec0a7c5d857"} Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.383811 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.383850 4869 scope.go:117] "RemoveContainer" containerID="ce4ed8defe687a6df6f01798d7371508e629beeb0f8de9da2c16edf7df5765cc" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.390864 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9fn6h" event={"ID":"d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf","Type":"ContainerDied","Data":"da48706441797a3e13844dabc040f6d49ccb27da7a314f246bf1ab250775d1c2"} Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.390898 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da48706441797a3e13844dabc040f6d49ccb27da7a314f246bf1ab250775d1c2" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.390921 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9fn6h" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.427772 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.443380 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.454156 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:30 crc kubenswrapper[4869]: E0314 09:19:30.454552 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" containerName="keystone-bootstrap" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.454591 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" containerName="keystone-bootstrap" Mar 14 09:19:30 crc kubenswrapper[4869]: E0314 09:19:30.454610 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.454616 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" Mar 14 09:19:30 crc kubenswrapper[4869]: E0314 09:19:30.454627 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api-log" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.454633 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api-log" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.454826 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api-log" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.454848 4869 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" containerName="watcher-api" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.454857 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" containerName="keystone-bootstrap" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.456132 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.458766 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.467919 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.549722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.549790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-config-data\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.549872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 
09:19:30.549914 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60619289-ef81-4fff-aacb-066eaa937f4f-logs\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.549931 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55lzc\" (UniqueName: \"kubernetes.io/projected/60619289-ef81-4fff-aacb-066eaa937f4f-kube-api-access-55lzc\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.651375 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.651448 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60619289-ef81-4fff-aacb-066eaa937f4f-logs\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.651473 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55lzc\" (UniqueName: \"kubernetes.io/projected/60619289-ef81-4fff-aacb-066eaa937f4f-kube-api-access-55lzc\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.651592 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.651650 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-config-data\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.652073 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60619289-ef81-4fff-aacb-066eaa937f4f-logs\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.656570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.659176 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-config-data\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.662841 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.668555 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55lzc\" (UniqueName: \"kubernetes.io/projected/60619289-ef81-4fff-aacb-066eaa937f4f-kube-api-access-55lzc\") pod \"watcher-api-0\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " pod="openstack/watcher-api-0" Mar 14 09:19:30 crc kubenswrapper[4869]: I0314 09:19:30.782258 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.166604 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9fn6h"] Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.177119 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9fn6h"] Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.276737 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t7bw5"] Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.278584 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.282664 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.282675 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zk6zl" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.282797 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.283201 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.283923 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.302248 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t7bw5"] Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.367729 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-config-data\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.367789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-combined-ca-bundle\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.367966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-fernet-keys\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.367990 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcr99\" (UniqueName: \"kubernetes.io/projected/ba55bdd0-5e03-45de-820b-59194effebf1-kube-api-access-tcr99\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.368066 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-scripts\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.368092 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-credential-keys\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.470359 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-combined-ca-bundle\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.470927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-fernet-keys\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.470957 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcr99\" (UniqueName: \"kubernetes.io/projected/ba55bdd0-5e03-45de-820b-59194effebf1-kube-api-access-tcr99\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.471000 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-scripts\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.471031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-credential-keys\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.471077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-config-data\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.477022 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-scripts\") pod \"keystone-bootstrap-t7bw5\" (UID: 
\"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.477648 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-combined-ca-bundle\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.477795 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-config-data\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.483197 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-credential-keys\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.488126 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-fernet-keys\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.491241 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcr99\" (UniqueName: \"kubernetes.io/projected/ba55bdd0-5e03-45de-820b-59194effebf1-kube-api-access-tcr99\") pod \"keystone-bootstrap-t7bw5\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 
09:19:31.597567 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.715893 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ee2085-138c-40e9-a964-85029ba0d51b" path="/var/lib/kubelet/pods/43ee2085-138c-40e9-a964-85029ba0d51b/volumes" Mar 14 09:19:31 crc kubenswrapper[4869]: I0314 09:19:31.716734 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf" path="/var/lib/kubelet/pods/d9e17bd9-dc6f-4045-9aa9-aa33013c7eaf/volumes" Mar 14 09:19:35 crc kubenswrapper[4869]: I0314 09:19:35.659833 4869 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode038eec8-d039-4436-a9af-3bd09cb8479f"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode038eec8-d039-4436-a9af-3bd09cb8479f] : Timed out while waiting for systemd to remove kubepods-besteffort-pode038eec8_d039_4436_a9af_3bd09cb8479f.slice" Mar 14 09:19:35 crc kubenswrapper[4869]: E0314 09:19:35.660312 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pode038eec8-d039-4436-a9af-3bd09cb8479f] : unable to destroy cgroup paths for cgroup [kubepods besteffort pode038eec8-d039-4436-a9af-3bd09cb8479f] : Timed out while waiting for systemd to remove kubepods-besteffort-pode038eec8_d039_4436_a9af_3bd09cb8479f.slice" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" podUID="e038eec8-d039-4436-a9af-3bd09cb8479f" Mar 14 09:19:36 crc kubenswrapper[4869]: I0314 09:19:36.449071 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d575b8c75-7rrrn" Mar 14 09:19:36 crc kubenswrapper[4869]: I0314 09:19:36.494481 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d575b8c75-7rrrn"] Mar 14 09:19:36 crc kubenswrapper[4869]: I0314 09:19:36.505796 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d575b8c75-7rrrn"] Mar 14 09:19:37 crc kubenswrapper[4869]: I0314 09:19:37.715303 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e038eec8-d039-4436-a9af-3bd09cb8479f" path="/var/lib/kubelet/pods/e038eec8-d039-4436-a9af-3bd09cb8479f/volumes" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.565869 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.640189 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-config-data\") pod \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.640282 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/369e3d1c-e33f-46d8-8a70-8d43f4df8878-horizon-secret-key\") pod \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.640373 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/369e3d1c-e33f-46d8-8a70-8d43f4df8878-logs\") pod \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.640429 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-scripts\") pod \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.640494 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22wcp\" (UniqueName: \"kubernetes.io/projected/369e3d1c-e33f-46d8-8a70-8d43f4df8878-kube-api-access-22wcp\") pod \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\" (UID: \"369e3d1c-e33f-46d8-8a70-8d43f4df8878\") " Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.640760 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/369e3d1c-e33f-46d8-8a70-8d43f4df8878-logs" (OuterVolumeSpecName: "logs") pod "369e3d1c-e33f-46d8-8a70-8d43f4df8878" (UID: "369e3d1c-e33f-46d8-8a70-8d43f4df8878"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.640959 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-scripts" (OuterVolumeSpecName: "scripts") pod "369e3d1c-e33f-46d8-8a70-8d43f4df8878" (UID: "369e3d1c-e33f-46d8-8a70-8d43f4df8878"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.641032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-config-data" (OuterVolumeSpecName: "config-data") pod "369e3d1c-e33f-46d8-8a70-8d43f4df8878" (UID: "369e3d1c-e33f-46d8-8a70-8d43f4df8878"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.641178 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/369e3d1c-e33f-46d8-8a70-8d43f4df8878-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.641204 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.644506 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/369e3d1c-e33f-46d8-8a70-8d43f4df8878-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "369e3d1c-e33f-46d8-8a70-8d43f4df8878" (UID: "369e3d1c-e33f-46d8-8a70-8d43f4df8878"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.645546 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369e3d1c-e33f-46d8-8a70-8d43f4df8878-kube-api-access-22wcp" (OuterVolumeSpecName: "kube-api-access-22wcp") pod "369e3d1c-e33f-46d8-8a70-8d43f4df8878" (UID: "369e3d1c-e33f-46d8-8a70-8d43f4df8878"). InnerVolumeSpecName "kube-api-access-22wcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.742202 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22wcp\" (UniqueName: \"kubernetes.io/projected/369e3d1c-e33f-46d8-8a70-8d43f4df8878-kube-api-access-22wcp\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.742239 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/369e3d1c-e33f-46d8-8a70-8d43f4df8878-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:38 crc kubenswrapper[4869]: I0314 09:19:38.742251 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/369e3d1c-e33f-46d8-8a70-8d43f4df8878-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:39 crc kubenswrapper[4869]: I0314 09:19:39.477306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65f8d579f9-g6vsd" event={"ID":"369e3d1c-e33f-46d8-8a70-8d43f4df8878","Type":"ContainerDied","Data":"213af1812b558cb2eea8d33ff8624a93f9d3645f90296de329d26f3483caf503"} Mar 14 09:19:39 crc kubenswrapper[4869]: I0314 09:19:39.477361 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65f8d579f9-g6vsd" Mar 14 09:19:39 crc kubenswrapper[4869]: I0314 09:19:39.551023 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65f8d579f9-g6vsd"] Mar 14 09:19:39 crc kubenswrapper[4869]: I0314 09:19:39.558714 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65f8d579f9-g6vsd"] Mar 14 09:19:39 crc kubenswrapper[4869]: I0314 09:19:39.716161 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="369e3d1c-e33f-46d8-8a70-8d43f4df8878" path="/var/lib/kubelet/pods/369e3d1c-e33f-46d8-8a70-8d43f4df8878/volumes" Mar 14 09:19:41 crc kubenswrapper[4869]: E0314 09:19:41.894190 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Mar 14 09:19:41 crc kubenswrapper[4869]: E0314 09:19:41.894608 4869 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Mar 14 09:19:41 crc kubenswrapper[4869]: E0314 09:19:41.894871 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.153:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n666hc5h76h5cdh68ch8fh65ch66ch54fh64ch85h66h5dbh58ch5f6h79h645h648h657h687h699h96h67ch56h575hbch5dbh584h5c7h5cdhdch665q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb2d9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(60f01a69-1d04-4788-b13d-f944b5f37b06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:19:42 crc kubenswrapper[4869]: E0314 09:19:42.360045 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Mar 14 09:19:42 crc kubenswrapper[4869]: E0314 09:19:42.360097 4869 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Mar 14 09:19:42 crc kubenswrapper[4869]: E0314 09:19:42.360205 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.153:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cdcw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-kjlv8_openstack(34747e66-40bd-4676-9d8e-673fb09120c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:19:42 crc kubenswrapper[4869]: E0314 09:19:42.361460 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-kjlv8" 
podUID="34747e66-40bd-4676-9d8e-673fb09120c0" Mar 14 09:19:42 crc kubenswrapper[4869]: E0314 09:19:42.513717 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.153:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-kjlv8" podUID="34747e66-40bd-4676-9d8e-673fb09120c0" Mar 14 09:19:43 crc kubenswrapper[4869]: E0314 09:19:43.527035 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Mar 14 09:19:43 crc kubenswrapper[4869]: E0314 09:19:43.527332 4869 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Mar 14 09:19:43 crc kubenswrapper[4869]: E0314 09:19:43.527545 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.153:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwxwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-jmtdx_openstack(5806f1f4-83ae-4f76-ba42-f4943cbef129): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:19:43 crc kubenswrapper[4869]: E0314 09:19:43.529087 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-jmtdx" podUID="5806f1f4-83ae-4f76-ba42-f4943cbef129" Mar 14 09:19:43 crc kubenswrapper[4869]: I0314 09:19:43.534019 4869 scope.go:117] "RemoveContainer" containerID="0e807146af10c5e469efb95a0d88eabbc056c22e198ecdc2d08809d58cb82f8a" Mar 14 09:19:43 crc kubenswrapper[4869]: I0314 09:19:43.971312 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9d48d6888-26pm7"] Mar 14 09:19:43 crc kubenswrapper[4869]: W0314 09:19:43.988560 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90750956_6a92_4c2c_8213_07cd62712ba1.slice/crio-f34461d9dc1e4f4c8eb074c44d23ea76178dd31c1ae2adb7ac9d30ed76a4aecf WatchSource:0}: Error finding container f34461d9dc1e4f4c8eb074c44d23ea76178dd31c1ae2adb7ac9d30ed76a4aecf: Status 404 returned error can't find the container with id f34461d9dc1e4f4c8eb074c44d23ea76178dd31c1ae2adb7ac9d30ed76a4aecf Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.092225 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b646449c6-8g8ql"] Mar 14 09:19:44 crc kubenswrapper[4869]: W0314 09:19:44.371989 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc776b1be_07b2_4de0_808f_48c9a550aaa4.slice/crio-3df5c8d258ebfe7b29c9d6eb214094614c0a1a0d21bad721786ef5a96a38c792 WatchSource:0}: Error finding container 3df5c8d258ebfe7b29c9d6eb214094614c0a1a0d21bad721786ef5a96a38c792: Status 404 returned error can't find the container with id 3df5c8d258ebfe7b29c9d6eb214094614c0a1a0d21bad721786ef5a96a38c792 Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.422011 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:44 crc kubenswrapper[4869]: W0314 09:19:44.455893 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60619289_ef81_4fff_aacb_066eaa937f4f.slice/crio-14065d056595ef4381c41da8d7e16bd6b59fac0367f4fd0b484d30e169fc0de7 WatchSource:0}: Error finding container 14065d056595ef4381c41da8d7e16bd6b59fac0367f4fd0b484d30e169fc0de7: Status 404 returned error can't find the container with id 14065d056595ef4381c41da8d7e16bd6b59fac0367f4fd0b484d30e169fc0de7 Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.500058 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t7bw5"] Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.533794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"f34461d9dc1e4f4c8eb074c44d23ea76178dd31c1ae2adb7ac9d30ed76a4aecf"} Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.536871 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b179f677-b76f-4f8c-813c-e80ddbca8632","Type":"ContainerStarted","Data":"24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f"} Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.536950 4869 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerName="glance-log" containerID="cri-o://408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda" gracePeriod=30 Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.536972 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerName="glance-httpd" containerID="cri-o://24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f" gracePeriod=30 Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.539157 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t7bw5" event={"ID":"ba55bdd0-5e03-45de-820b-59194effebf1","Type":"ContainerStarted","Data":"4696c930477ca0dad718aeff2bfb21ec8121a5b2b6939b58a8e8c2effdc8e4e4"} Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.540640 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"60619289-ef81-4fff-aacb-066eaa937f4f","Type":"ContainerStarted","Data":"14065d056595ef4381c41da8d7e16bd6b59fac0367f4fd0b484d30e169fc0de7"} Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.545592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85422929-62ea-468f-8f33-5c663a915aac","Type":"ContainerStarted","Data":"88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47"} Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.545754 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="85422929-62ea-468f-8f33-5c663a915aac" containerName="glance-log" containerID="cri-o://5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96" gracePeriod=30 Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.546175 4869 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="85422929-62ea-468f-8f33-5c663a915aac" containerName="glance-httpd" containerID="cri-o://88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47" gracePeriod=30 Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.551150 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"3df5c8d258ebfe7b29c9d6eb214094614c0a1a0d21bad721786ef5a96a38c792"} Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.568013 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" event={"ID":"d0a3057f-b699-4f14-bfa0-7bda292b3c82","Type":"ContainerStarted","Data":"cb9656cbe4b554a608da488ef7353dcb98b0c014c1540780279fb41d9f3d109b"} Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.568046 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.569049 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=39.569033007 podStartE2EDuration="39.569033007s" podCreationTimestamp="2026-03-14 09:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:44.559103813 +0000 UTC m=+1337.531385876" watchObservedRunningTime="2026-03-14 09:19:44.569033007 +0000 UTC m=+1337.541315050" Mar 14 09:19:44 crc kubenswrapper[4869]: E0314 09:19:44.588208 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.153:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-jmtdx" 
podUID="5806f1f4-83ae-4f76-ba42-f4943cbef129" Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.591206 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=39.591191864 podStartE2EDuration="39.591191864s" podCreationTimestamp="2026-03-14 09:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:44.589694328 +0000 UTC m=+1337.561976401" watchObservedRunningTime="2026-03-14 09:19:44.591191864 +0000 UTC m=+1337.563473917" Mar 14 09:19:44 crc kubenswrapper[4869]: I0314 09:19:44.615658 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" podStartSLOduration=39.615640138 podStartE2EDuration="39.615640138s" podCreationTimestamp="2026-03-14 09:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:44.60682819 +0000 UTC m=+1337.579110263" watchObservedRunningTime="2026-03-14 09:19:44.615640138 +0000 UTC m=+1337.587922191" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.060443 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.100079 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-httpd-run\") pod \"85422929-62ea-468f-8f33-5c663a915aac\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.100174 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-logs\") pod \"85422929-62ea-468f-8f33-5c663a915aac\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.100274 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"85422929-62ea-468f-8f33-5c663a915aac\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.100337 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsh5v\" (UniqueName: \"kubernetes.io/projected/85422929-62ea-468f-8f33-5c663a915aac-kube-api-access-xsh5v\") pod \"85422929-62ea-468f-8f33-5c663a915aac\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.100382 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-config-data\") pod \"85422929-62ea-468f-8f33-5c663a915aac\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.100429 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-scripts\") pod \"85422929-62ea-468f-8f33-5c663a915aac\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.100468 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-combined-ca-bundle\") pod \"85422929-62ea-468f-8f33-5c663a915aac\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.101087 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-internal-tls-certs\") pod \"85422929-62ea-468f-8f33-5c663a915aac\" (UID: \"85422929-62ea-468f-8f33-5c663a915aac\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.101363 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "85422929-62ea-468f-8f33-5c663a915aac" (UID: "85422929-62ea-468f-8f33-5c663a915aac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.101564 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.102165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-logs" (OuterVolumeSpecName: "logs") pod "85422929-62ea-468f-8f33-5c663a915aac" (UID: "85422929-62ea-468f-8f33-5c663a915aac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.109338 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85422929-62ea-468f-8f33-5c663a915aac-kube-api-access-xsh5v" (OuterVolumeSpecName: "kube-api-access-xsh5v") pod "85422929-62ea-468f-8f33-5c663a915aac" (UID: "85422929-62ea-468f-8f33-5c663a915aac"). InnerVolumeSpecName "kube-api-access-xsh5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.122110 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "85422929-62ea-468f-8f33-5c663a915aac" (UID: "85422929-62ea-468f-8f33-5c663a915aac"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.128144 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-scripts" (OuterVolumeSpecName: "scripts") pod "85422929-62ea-468f-8f33-5c663a915aac" (UID: "85422929-62ea-468f-8f33-5c663a915aac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.205841 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85422929-62ea-468f-8f33-5c663a915aac-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.205895 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.205907 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsh5v\" (UniqueName: \"kubernetes.io/projected/85422929-62ea-468f-8f33-5c663a915aac-kube-api-access-xsh5v\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.205919 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.239417 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.307137 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-httpd-run\") pod \"b179f677-b76f-4f8c-813c-e80ddbca8632\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.307171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-combined-ca-bundle\") pod \"b179f677-b76f-4f8c-813c-e80ddbca8632\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.307196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-scripts\") pod \"b179f677-b76f-4f8c-813c-e80ddbca8632\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.307226 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h47x5\" (UniqueName: \"kubernetes.io/projected/b179f677-b76f-4f8c-813c-e80ddbca8632-kube-api-access-h47x5\") pod \"b179f677-b76f-4f8c-813c-e80ddbca8632\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.307247 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-config-data\") pod \"b179f677-b76f-4f8c-813c-e80ddbca8632\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.307264 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"b179f677-b76f-4f8c-813c-e80ddbca8632\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.307311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-logs\") pod \"b179f677-b76f-4f8c-813c-e80ddbca8632\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.307358 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-public-tls-certs\") pod \"b179f677-b76f-4f8c-813c-e80ddbca8632\" (UID: \"b179f677-b76f-4f8c-813c-e80ddbca8632\") " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.308028 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b179f677-b76f-4f8c-813c-e80ddbca8632" (UID: "b179f677-b76f-4f8c-813c-e80ddbca8632"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.308283 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-logs" (OuterVolumeSpecName: "logs") pod "b179f677-b76f-4f8c-813c-e80ddbca8632" (UID: "b179f677-b76f-4f8c-813c-e80ddbca8632"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.308354 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.340382 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.340890 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "b179f677-b76f-4f8c-813c-e80ddbca8632" (UID: "b179f677-b76f-4f8c-813c-e80ddbca8632"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.340920 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b179f677-b76f-4f8c-813c-e80ddbca8632-kube-api-access-h47x5" (OuterVolumeSpecName: "kube-api-access-h47x5") pod "b179f677-b76f-4f8c-813c-e80ddbca8632" (UID: "b179f677-b76f-4f8c-813c-e80ddbca8632"). InnerVolumeSpecName "kube-api-access-h47x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.349834 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-scripts" (OuterVolumeSpecName: "scripts") pod "b179f677-b76f-4f8c-813c-e80ddbca8632" (UID: "b179f677-b76f-4f8c-813c-e80ddbca8632"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.390887 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85422929-62ea-468f-8f33-5c663a915aac" (UID: "85422929-62ea-468f-8f33-5c663a915aac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.410268 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.410289 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.410299 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h47x5\" (UniqueName: \"kubernetes.io/projected/b179f677-b76f-4f8c-813c-e80ddbca8632-kube-api-access-h47x5\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.410320 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.410329 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.410341 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b179f677-b76f-4f8c-813c-e80ddbca8632-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.413408 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b179f677-b76f-4f8c-813c-e80ddbca8632" (UID: "b179f677-b76f-4f8c-813c-e80ddbca8632"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.430445 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.440462 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "85422929-62ea-468f-8f33-5c663a915aac" (UID: "85422929-62ea-468f-8f33-5c663a915aac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.501683 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-config-data" (OuterVolumeSpecName: "config-data") pod "b179f677-b76f-4f8c-813c-e80ddbca8632" (UID: "b179f677-b76f-4f8c-813c-e80ddbca8632"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.502624 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b179f677-b76f-4f8c-813c-e80ddbca8632" (UID: "b179f677-b76f-4f8c-813c-e80ddbca8632"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.511201 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.511226 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.511237 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.511246 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b179f677-b76f-4f8c-813c-e80ddbca8632-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.511256 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.527880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-config-data" (OuterVolumeSpecName: "config-data") pod "85422929-62ea-468f-8f33-5c663a915aac" (UID: "85422929-62ea-468f-8f33-5c663a915aac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.602281 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7558987fbf-ps5jx" event={"ID":"ed19444d-bcb2-4703-9de9-14828f14fed1","Type":"ContainerStarted","Data":"050e858f30980bf63823f13a4d44bbaa98479252cd1164bba19f482f360487aa"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.602324 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7558987fbf-ps5jx" event={"ID":"ed19444d-bcb2-4703-9de9-14828f14fed1","Type":"ContainerStarted","Data":"76b80c712aec5b31ce3165b6defd2939fa432bc2b9ea72c8f6c8a57fff2da6ff"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.602436 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7558987fbf-ps5jx" podUID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerName="horizon-log" containerID="cri-o://76b80c712aec5b31ce3165b6defd2939fa432bc2b9ea72c8f6c8a57fff2da6ff" gracePeriod=30 Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.602998 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7558987fbf-ps5jx" podUID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerName="horizon" containerID="cri-o://050e858f30980bf63823f13a4d44bbaa98479252cd1164bba19f482f360487aa" gracePeriod=30 Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.609495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c1ffadc1-b64b-4763-a8b9-b5047caf3166","Type":"ContainerStarted","Data":"1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.612825 4869 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85422929-62ea-468f-8f33-5c663a915aac-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.617436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"60619289-ef81-4fff-aacb-066eaa937f4f","Type":"ContainerStarted","Data":"8a942069f0a6d4aa7de9d672d56ea160d9105ed42bfc6bbc897459950bedf3c1"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.617474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"60619289-ef81-4fff-aacb-066eaa937f4f","Type":"ContainerStarted","Data":"55a0385708234f7f5103bc033e67b83693570f78878c6fa43452091b7ab5befd"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.618481 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.619693 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.170:9322/\": dial tcp 10.217.0.170:9322: connect: connection refused" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.622877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jxvhl" event={"ID":"e612c02e-1383-4a14-9267-e1742cb95cc7","Type":"ContainerStarted","Data":"666c3fcdee0f3d60ce35f3dd71b484228a4d2c0c4b433f74a33e9da8a140605f"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.626183 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7558987fbf-ps5jx" podStartSLOduration=6.911591233 podStartE2EDuration="40.626157002s" podCreationTimestamp="2026-03-14 09:19:05 +0000 UTC" firstStartedPulling="2026-03-14 09:19:08.655264863 +0000 UTC m=+1301.627546916" 
lastFinishedPulling="2026-03-14 09:19:42.369830632 +0000 UTC m=+1335.342112685" observedRunningTime="2026-03-14 09:19:45.623976379 +0000 UTC m=+1338.596258432" watchObservedRunningTime="2026-03-14 09:19:45.626157002 +0000 UTC m=+1338.598439055" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.627994 4869 generic.go:334] "Generic (PLEG): container finished" podID="85422929-62ea-468f-8f33-5c663a915aac" containerID="88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47" exitCode=143 Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.628020 4869 generic.go:334] "Generic (PLEG): container finished" podID="85422929-62ea-468f-8f33-5c663a915aac" containerID="5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96" exitCode=143 Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.628063 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85422929-62ea-468f-8f33-5c663a915aac","Type":"ContainerDied","Data":"88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.628090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85422929-62ea-468f-8f33-5c663a915aac","Type":"ContainerDied","Data":"5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.628101 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85422929-62ea-468f-8f33-5c663a915aac","Type":"ContainerDied","Data":"bd09464a0026461d36cf96479207e80fdd948efe7c93db3ed6e52998f13eec17"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.628117 4869 scope.go:117] "RemoveContainer" containerID="88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.628234 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.646919 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"2eca18a017dad05c45b59073bf8100704a59b4c6333a251e25984d27edaa4aa6"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.655730 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=15.655715183 podStartE2EDuration="15.655715183s" podCreationTimestamp="2026-03-14 09:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:45.646559416 +0000 UTC m=+1338.618841469" watchObservedRunningTime="2026-03-14 09:19:45.655715183 +0000 UTC m=+1338.627997246" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.656258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60f01a69-1d04-4788-b13d-f944b5f37b06","Type":"ContainerStarted","Data":"d9439a2f209adf9d7b1d2bb9a0a3cff8f81229588faa1acc51d30baebcee1776"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.661347 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t7bw5" event={"ID":"ba55bdd0-5e03-45de-820b-59194effebf1","Type":"ContainerStarted","Data":"40ba9f6148f7aaac7f94702a995b632193b84b96097df26f9f59e7d37b78357b"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.665178 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5858c9f6c-clfct" event={"ID":"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62","Type":"ContainerStarted","Data":"0a0656a73f164123e855b5c5c183c393d6c6cc2bea2ca5dcb0de4b3987676f33"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.670135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-applier-0" event={"ID":"6e1103e5-8974-4f6f-8240-9f000114e32b","Type":"ContainerStarted","Data":"82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.672018 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"5600744ca495f899d07e3e15ba32827056fcc8d7bb2a76e7a2de37871c649430"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.677704 4869 generic.go:334] "Generic (PLEG): container finished" podID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerID="24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f" exitCode=143 Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.677734 4869 generic.go:334] "Generic (PLEG): container finished" podID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerID="408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda" exitCode=143 Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.677801 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=7.999086482 podStartE2EDuration="41.677789716s" podCreationTimestamp="2026-03-14 09:19:04 +0000 UTC" firstStartedPulling="2026-03-14 09:19:08.654938505 +0000 UTC m=+1301.627220558" lastFinishedPulling="2026-03-14 09:19:42.333641739 +0000 UTC m=+1335.305923792" observedRunningTime="2026-03-14 09:19:45.665414011 +0000 UTC m=+1338.637696084" watchObservedRunningTime="2026-03-14 09:19:45.677789716 +0000 UTC m=+1338.650071759" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.678284 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.678536 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b179f677-b76f-4f8c-813c-e80ddbca8632","Type":"ContainerDied","Data":"24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.678571 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b179f677-b76f-4f8c-813c-e80ddbca8632","Type":"ContainerDied","Data":"408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.678583 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b179f677-b76f-4f8c-813c-e80ddbca8632","Type":"ContainerDied","Data":"5689f3f1b1d1d84a5456cf90f3358af2de90a04b676e303cb10886f0bdd60290"} Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.682781 4869 scope.go:117] "RemoveContainer" containerID="5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.687601 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jxvhl" podStartSLOduration=7.5193384089999995 podStartE2EDuration="40.687583618s" podCreationTimestamp="2026-03-14 09:19:05 +0000 UTC" firstStartedPulling="2026-03-14 09:19:08.704320963 +0000 UTC m=+1301.676603016" lastFinishedPulling="2026-03-14 09:19:41.872566162 +0000 UTC m=+1334.844848225" observedRunningTime="2026-03-14 09:19:45.68037389 +0000 UTC m=+1338.652655943" watchObservedRunningTime="2026-03-14 09:19:45.687583618 +0000 UTC m=+1338.659865661" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.703900 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" 
podStartSLOduration=10.870477566 podStartE2EDuration="41.703883651s" podCreationTimestamp="2026-03-14 09:19:04 +0000 UTC" firstStartedPulling="2026-03-14 09:19:07.630530977 +0000 UTC m=+1300.602813030" lastFinishedPulling="2026-03-14 09:19:38.463937062 +0000 UTC m=+1331.436219115" observedRunningTime="2026-03-14 09:19:45.700974589 +0000 UTC m=+1338.673256642" watchObservedRunningTime="2026-03-14 09:19:45.703883651 +0000 UTC m=+1338.676165714" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.719982 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t7bw5" podStartSLOduration=14.719950207 podStartE2EDuration="14.719950207s" podCreationTimestamp="2026-03-14 09:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:45.717667411 +0000 UTC m=+1338.689949464" watchObservedRunningTime="2026-03-14 09:19:45.719950207 +0000 UTC m=+1338.692232280" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.725317 4869 scope.go:117] "RemoveContainer" containerID="88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47" Mar 14 09:19:45 crc kubenswrapper[4869]: E0314 09:19:45.731294 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47\": container with ID starting with 88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47 not found: ID does not exist" containerID="88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.731336 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47"} err="failed to get container status \"88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47\": rpc 
error: code = NotFound desc = could not find container \"88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47\": container with ID starting with 88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47 not found: ID does not exist" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.731365 4869 scope.go:117] "RemoveContainer" containerID="5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96" Mar 14 09:19:45 crc kubenswrapper[4869]: E0314 09:19:45.732029 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96\": container with ID starting with 5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96 not found: ID does not exist" containerID="5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.732053 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96"} err="failed to get container status \"5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96\": rpc error: code = NotFound desc = could not find container \"5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96\": container with ID starting with 5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96 not found: ID does not exist" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.732071 4869 scope.go:117] "RemoveContainer" containerID="88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.732242 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47"} err="failed to get container status 
\"88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47\": rpc error: code = NotFound desc = could not find container \"88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47\": container with ID starting with 88dc87b7279d7702275e34f497f0215361c95e6872adf562e6d7855bd2c68e47 not found: ID does not exist" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.732256 4869 scope.go:117] "RemoveContainer" containerID="5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.732420 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96"} err="failed to get container status \"5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96\": rpc error: code = NotFound desc = could not find container \"5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96\": container with ID starting with 5f5331ef91f3765872c25dff111d4707f896833f1a6cfd94364371ae2885be96 not found: ID does not exist" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.732436 4869 scope.go:117] "RemoveContainer" containerID="24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.745384 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.759183 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.759686 4869 scope.go:117] "RemoveContainer" containerID="408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.783090 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" 
containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.170:9322/\": dial tcp 10.217.0.170:9322: connect: connection refused" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.783174 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.784600 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.785012 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/watcher-api-0" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.170:9322/\": dial tcp 10.217.0.170:9322: connect: connection refused" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.814285 4869 scope.go:117] "RemoveContainer" containerID="24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f" Mar 14 09:19:45 crc kubenswrapper[4869]: E0314 09:19:45.824645 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f\": container with ID starting with 24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f not found: ID does not exist" containerID="24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.824878 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f"} err="failed to get container status \"24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f\": rpc error: code = NotFound desc = could not find container \"24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f\": container with ID starting with 
24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f not found: ID does not exist" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.824906 4869 scope.go:117] "RemoveContainer" containerID="408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda" Mar 14 09:19:45 crc kubenswrapper[4869]: E0314 09:19:45.829885 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda\": container with ID starting with 408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda not found: ID does not exist" containerID="408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.829923 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda"} err="failed to get container status \"408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda\": rpc error: code = NotFound desc = could not find container \"408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda\": container with ID starting with 408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda not found: ID does not exist" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.829946 4869 scope.go:117] "RemoveContainer" containerID="24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.831044 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f"} err="failed to get container status \"24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f\": rpc error: code = NotFound desc = could not find container \"24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f\": container with ID 
starting with 24e07f503c74e815471753317ae36a3e366a78573281f043449fc5de3480441f not found: ID does not exist" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.831066 4869 scope.go:117] "RemoveContainer" containerID="408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.831114 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:45 crc kubenswrapper[4869]: E0314 09:19:45.831488 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerName="glance-httpd" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.831703 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerName="glance-httpd" Mar 14 09:19:45 crc kubenswrapper[4869]: E0314 09:19:45.831738 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerName="glance-log" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.831747 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerName="glance-log" Mar 14 09:19:45 crc kubenswrapper[4869]: E0314 09:19:45.831783 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85422929-62ea-468f-8f33-5c663a915aac" containerName="glance-log" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.831792 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="85422929-62ea-468f-8f33-5c663a915aac" containerName="glance-log" Mar 14 09:19:45 crc kubenswrapper[4869]: E0314 09:19:45.831807 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85422929-62ea-468f-8f33-5c663a915aac" containerName="glance-httpd" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.831814 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="85422929-62ea-468f-8f33-5c663a915aac" 
containerName="glance-httpd" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.832046 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerName="glance-httpd" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.832067 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="85422929-62ea-468f-8f33-5c663a915aac" containerName="glance-httpd" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.832079 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b179f677-b76f-4f8c-813c-e80ddbca8632" containerName="glance-log" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.832092 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="85422929-62ea-468f-8f33-5c663a915aac" containerName="glance-log" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.833052 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda"} err="failed to get container status \"408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda\": rpc error: code = NotFound desc = could not find container \"408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda\": container with ID starting with 408d5e7fde12469ae2dd65d073e90559dc2f3ada459c22b7447fa9c78af0bdda not found: ID does not exist" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.833203 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.849574 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.850222 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.850356 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ghmv7" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.850483 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.872795 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.888610 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.895626 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.897192 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.901726 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.901747 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 14 09:19:45 crc kubenswrapper[4869]: I0314 09:19:45.904569 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030705 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030770 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030807 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-logs\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030836 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030860 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030881 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030894 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030910 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030927 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.030952 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-config-data\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.031009 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.031062 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.031084 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.031102 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-scripts\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.031351 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7fz\" (UniqueName: \"kubernetes.io/projected/f1c5363e-e811-4795-9b80-7f4be678b705-kube-api-access-hq7fz\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.031442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncz6r\" (UniqueName: \"kubernetes.io/projected/c4ef0fc1-f98b-4e00-8066-9084f1631bff-kube-api-access-ncz6r\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.049251 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.161949 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-logs\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162010 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " 
pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162073 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162121 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc 
kubenswrapper[4869]: I0314 09:19:46.162146 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-config-data\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162207 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162240 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-scripts\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162259 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hq7fz\" (UniqueName: \"kubernetes.io/projected/f1c5363e-e811-4795-9b80-7f4be678b705-kube-api-access-hq7fz\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncz6r\" (UniqueName: \"kubernetes.io/projected/c4ef0fc1-f98b-4e00-8066-9084f1631bff-kube-api-access-ncz6r\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162342 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.162378 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.163152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-logs\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.166053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.166799 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.172913 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.173621 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.174761 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.177087 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.185275 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.185908 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-config-data\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.186409 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.189218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.189925 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.190013 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-scripts\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.192277 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncz6r\" (UniqueName: \"kubernetes.io/projected/c4ef0fc1-f98b-4e00-8066-9084f1631bff-kube-api-access-ncz6r\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.203190 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.210438 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7fz\" (UniqueName: \"kubernetes.io/projected/f1c5363e-e811-4795-9b80-7f4be678b705-kube-api-access-hq7fz\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.232657 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " pod="openstack/glance-default-internal-api-0" 
Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.234498 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.476957 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.532977 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.765304 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"d9350b7741367dcb5bafd619c4222f86bac421890722eaddb9455a7ca317532d"} Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.785252 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5858c9f6c-clfct" event={"ID":"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62","Type":"ContainerStarted","Data":"f4e07bdc55d0d34423f7552cd97943966eae7eace67df01331f16ec3c7632b39"} Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.785392 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5858c9f6c-clfct" podUID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerName="horizon-log" containerID="cri-o://0a0656a73f164123e855b5c5c183c393d6c6cc2bea2ca5dcb0de4b3987676f33" gracePeriod=30 Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.785651 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5858c9f6c-clfct" podUID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerName="horizon" 
containerID="cri-o://f4e07bdc55d0d34423f7552cd97943966eae7eace67df01331f16ec3c7632b39" gracePeriod=30 Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.798490 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"699930b27e75b4ac2d3a83a8e22e90af1021ec1c8d1dcf16a6d11d5c3b5de617"} Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.813716 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b646449c6-8g8ql" podStartSLOduration=32.813694256 podStartE2EDuration="32.813694256s" podCreationTimestamp="2026-03-14 09:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:46.802057488 +0000 UTC m=+1339.774339541" watchObservedRunningTime="2026-03-14 09:19:46.813694256 +0000 UTC m=+1339.785976309" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.845358 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5858c9f6c-clfct" podStartSLOduration=7.777799505 podStartE2EDuration="38.845332786s" podCreationTimestamp="2026-03-14 09:19:08 +0000 UTC" firstStartedPulling="2026-03-14 09:19:12.93872391 +0000 UTC m=+1305.911005963" lastFinishedPulling="2026-03-14 09:19:44.006257191 +0000 UTC m=+1336.978539244" observedRunningTime="2026-03-14 09:19:46.822992655 +0000 UTC m=+1339.795274718" watchObservedRunningTime="2026-03-14 09:19:46.845332786 +0000 UTC m=+1339.817614849" Mar 14 09:19:46 crc kubenswrapper[4869]: I0314 09:19:46.886690 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9d48d6888-26pm7" podStartSLOduration=32.886614365 podStartE2EDuration="32.886614365s" podCreationTimestamp="2026-03-14 09:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-14 09:19:46.874780503 +0000 UTC m=+1339.847062576" watchObservedRunningTime="2026-03-14 09:19:46.886614365 +0000 UTC m=+1339.858896418" Mar 14 09:19:47 crc kubenswrapper[4869]: I0314 09:19:47.229867 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:19:47 crc kubenswrapper[4869]: I0314 09:19:47.316326 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:19:47 crc kubenswrapper[4869]: W0314 09:19:47.361569 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4ef0fc1_f98b_4e00_8066_9084f1631bff.slice/crio-1788681cda8f1ccc04000f48a6e14193f8103712628ba1bc048df5ade17ce0b4 WatchSource:0}: Error finding container 1788681cda8f1ccc04000f48a6e14193f8103712628ba1bc048df5ade17ce0b4: Status 404 returned error can't find the container with id 1788681cda8f1ccc04000f48a6e14193f8103712628ba1bc048df5ade17ce0b4 Mar 14 09:19:47 crc kubenswrapper[4869]: I0314 09:19:47.722339 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85422929-62ea-468f-8f33-5c663a915aac" path="/var/lib/kubelet/pods/85422929-62ea-468f-8f33-5c663a915aac/volumes" Mar 14 09:19:47 crc kubenswrapper[4869]: I0314 09:19:47.723560 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b179f677-b76f-4f8c-813c-e80ddbca8632" path="/var/lib/kubelet/pods/b179f677-b76f-4f8c-813c-e80ddbca8632/volumes" Mar 14 09:19:47 crc kubenswrapper[4869]: I0314 09:19:47.850295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ef0fc1-f98b-4e00-8066-9084f1631bff","Type":"ContainerStarted","Data":"1788681cda8f1ccc04000f48a6e14193f8103712628ba1bc048df5ade17ce0b4"} Mar 14 09:19:47 crc kubenswrapper[4869]: I0314 09:19:47.886716 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"f1c5363e-e811-4795-9b80-7f4be678b705","Type":"ContainerStarted","Data":"1d46a021f6952a9aa5e81920d76059f069961778a26d5a3e9cd4cdfcac9ec8bb"} Mar 14 09:19:49 crc kubenswrapper[4869]: I0314 09:19:49.476566 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:19:49 crc kubenswrapper[4869]: I0314 09:19:49.886015 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:19:49 crc kubenswrapper[4869]: I0314 09:19:49.906802 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ef0fc1-f98b-4e00-8066-9084f1631bff","Type":"ContainerStarted","Data":"401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da"} Mar 14 09:19:49 crc kubenswrapper[4869]: I0314 09:19:49.908312 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1c5363e-e811-4795-9b80-7f4be678b705","Type":"ContainerStarted","Data":"a11bb73fa16a10f061f68c1e9077ddf948b7cf8de5fc0a18e9d6e9f6f7331ecc"} Mar 14 09:19:50 crc kubenswrapper[4869]: I0314 09:19:50.304596 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Mar 14 09:19:50 crc kubenswrapper[4869]: I0314 09:19:50.782997 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 14 09:19:50 crc kubenswrapper[4869]: I0314 09:19:50.787839 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 14 09:19:50 crc kubenswrapper[4869]: I0314 09:19:50.922658 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:19:51 crc kubenswrapper[4869]: I0314 09:19:51.478689 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:19:51 crc kubenswrapper[4869]: I0314 09:19:51.544092 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f996c95-j4szb"] Mar 14 09:19:51 crc kubenswrapper[4869]: I0314 09:19:51.544317 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" podUID="c6b271a0-998e-46d6-863f-ce41b946c67d" containerName="dnsmasq-dns" containerID="cri-o://68d188360ae68bdc5a6ce55a437b895502dbebf1868cad75560ce9c38f419543" gracePeriod=10 Mar 14 09:19:51 crc kubenswrapper[4869]: I0314 09:19:51.935466 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ef0fc1-f98b-4e00-8066-9084f1631bff","Type":"ContainerStarted","Data":"4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca"} Mar 14 09:19:51 crc kubenswrapper[4869]: I0314 09:19:51.938652 4869 generic.go:334] "Generic (PLEG): container finished" podID="c6b271a0-998e-46d6-863f-ce41b946c67d" containerID="68d188360ae68bdc5a6ce55a437b895502dbebf1868cad75560ce9c38f419543" exitCode=0 Mar 14 09:19:51 crc kubenswrapper[4869]: I0314 09:19:51.938857 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" event={"ID":"c6b271a0-998e-46d6-863f-ce41b946c67d","Type":"ContainerDied","Data":"68d188360ae68bdc5a6ce55a437b895502dbebf1868cad75560ce9c38f419543"} Mar 14 09:19:51 crc kubenswrapper[4869]: I0314 09:19:51.977585 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.977559496 podStartE2EDuration="6.977559496s" podCreationTimestamp="2026-03-14 09:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:51.953303137 +0000 UTC m=+1344.925585210" watchObservedRunningTime="2026-03-14 09:19:51.977559496 +0000 
UTC m=+1344.949841559" Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.106819 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" podUID="c6b271a0-998e-46d6-863f-ce41b946c67d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.221129 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.221386 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api-log" containerID="cri-o://55a0385708234f7f5103bc033e67b83693570f78878c6fa43452091b7ab5befd" gracePeriod=30 Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.221597 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api" containerID="cri-o://8a942069f0a6d4aa7de9d672d56ea160d9105ed42bfc6bbc897459950bedf3c1" gracePeriod=30 Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.404595 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.404991 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.539481 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.539569 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.983316 4869 generic.go:334] "Generic (PLEG): 
container finished" podID="60619289-ef81-4fff-aacb-066eaa937f4f" containerID="55a0385708234f7f5103bc033e67b83693570f78878c6fa43452091b7ab5befd" exitCode=143 Mar 14 09:19:54 crc kubenswrapper[4869]: I0314 09:19:54.983399 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"60619289-ef81-4fff-aacb-066eaa937f4f","Type":"ContainerDied","Data":"55a0385708234f7f5103bc033e67b83693570f78878c6fa43452091b7ab5befd"} Mar 14 09:19:55 crc kubenswrapper[4869]: I0314 09:19:55.304358 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Mar 14 09:19:55 crc kubenswrapper[4869]: I0314 09:19:55.347563 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Mar 14 09:19:55 crc kubenswrapper[4869]: I0314 09:19:55.380551 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:55 crc kubenswrapper[4869]: I0314 09:19:55.416133 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.001372 4869 generic.go:334] "Generic (PLEG): container finished" podID="ba55bdd0-5e03-45de-820b-59194effebf1" containerID="40ba9f6148f7aaac7f94702a995b632193b84b96097df26f9f59e7d37b78357b" exitCode=0 Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.001455 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t7bw5" event={"ID":"ba55bdd0-5e03-45de-820b-59194effebf1","Type":"ContainerDied","Data":"40ba9f6148f7aaac7f94702a995b632193b84b96097df26f9f59e7d37b78357b"} Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.002003 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.035278 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/watcher-applier-0" Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.044269 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.071455 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.137430 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.143159 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.170:9322/\": read tcp 10.217.0.2:49182->10.217.0.170:9322: read: connection reset by peer" Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.143159 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.170:9322/\": read tcp 10.217.0.2:49188->10.217.0.170:9322: read: connection reset by peer" Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.533890 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.533972 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.573634 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:56 crc kubenswrapper[4869]: I0314 09:19:56.589267 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.019752 4869 generic.go:334] "Generic (PLEG): container finished" podID="60619289-ef81-4fff-aacb-066eaa937f4f" containerID="8a942069f0a6d4aa7de9d672d56ea160d9105ed42bfc6bbc897459950bedf3c1" exitCode=0 Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.019808 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"60619289-ef81-4fff-aacb-066eaa937f4f","Type":"ContainerDied","Data":"8a942069f0a6d4aa7de9d672d56ea160d9105ed42bfc6bbc897459950bedf3c1"} Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.023360 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="699930b27e75b4ac2d3a83a8e22e90af1021ec1c8d1dcf16a6d11d5c3b5de617" exitCode=1 Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.024193 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"699930b27e75b4ac2d3a83a8e22e90af1021ec1c8d1dcf16a6d11d5c3b5de617"} Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.024531 4869 scope.go:117] "RemoveContainer" containerID="699930b27e75b4ac2d3a83a8e22e90af1021ec1c8d1dcf16a6d11d5c3b5de617" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.025611 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.025631 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.432837 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.557183 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-config\") pod \"c6b271a0-998e-46d6-863f-ce41b946c67d\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.557242 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-swift-storage-0\") pod \"c6b271a0-998e-46d6-863f-ce41b946c67d\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.557308 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-svc\") pod \"c6b271a0-998e-46d6-863f-ce41b946c67d\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.557402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-sb\") pod \"c6b271a0-998e-46d6-863f-ce41b946c67d\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.557478 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-nb\") pod \"c6b271a0-998e-46d6-863f-ce41b946c67d\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.557544 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtvgv\" 
(UniqueName: \"kubernetes.io/projected/c6b271a0-998e-46d6-863f-ce41b946c67d-kube-api-access-xtvgv\") pod \"c6b271a0-998e-46d6-863f-ce41b946c67d\" (UID: \"c6b271a0-998e-46d6-863f-ce41b946c67d\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.579678 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b271a0-998e-46d6-863f-ce41b946c67d-kube-api-access-xtvgv" (OuterVolumeSpecName: "kube-api-access-xtvgv") pod "c6b271a0-998e-46d6-863f-ce41b946c67d" (UID: "c6b271a0-998e-46d6-863f-ce41b946c67d"). InnerVolumeSpecName "kube-api-access-xtvgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.586796 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.596162 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.635988 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c6b271a0-998e-46d6-863f-ce41b946c67d" (UID: "c6b271a0-998e-46d6-863f-ce41b946c67d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.648052 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-config" (OuterVolumeSpecName: "config") pod "c6b271a0-998e-46d6-863f-ce41b946c67d" (UID: "c6b271a0-998e-46d6-863f-ce41b946c67d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.654975 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c6b271a0-998e-46d6-863f-ce41b946c67d" (UID: "c6b271a0-998e-46d6-863f-ce41b946c67d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.655982 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c6b271a0-998e-46d6-863f-ce41b946c67d" (UID: "c6b271a0-998e-46d6-863f-ce41b946c67d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.659643 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.659677 4869 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.659692 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.659704 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-ovsdbserver-nb\") on node \"crc\" DevicePath 
\"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.659715 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtvgv\" (UniqueName: \"kubernetes.io/projected/c6b271a0-998e-46d6-863f-ce41b946c67d-kube-api-access-xtvgv\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.680474 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6b271a0-998e-46d6-863f-ce41b946c67d" (UID: "c6b271a0-998e-46d6-863f-ce41b946c67d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.760626 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcr99\" (UniqueName: \"kubernetes.io/projected/ba55bdd0-5e03-45de-820b-59194effebf1-kube-api-access-tcr99\") pod \"ba55bdd0-5e03-45de-820b-59194effebf1\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761282 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-scripts\") pod \"ba55bdd0-5e03-45de-820b-59194effebf1\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761346 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-fernet-keys\") pod \"ba55bdd0-5e03-45de-820b-59194effebf1\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761390 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-config-data\") pod \"ba55bdd0-5e03-45de-820b-59194effebf1\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761424 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-combined-ca-bundle\") pod \"ba55bdd0-5e03-45de-820b-59194effebf1\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-config-data\") pod \"60619289-ef81-4fff-aacb-066eaa937f4f\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60619289-ef81-4fff-aacb-066eaa937f4f-logs\") pod \"60619289-ef81-4fff-aacb-066eaa937f4f\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761573 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-combined-ca-bundle\") pod \"60619289-ef81-4fff-aacb-066eaa937f4f\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761642 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-credential-keys\") pod \"ba55bdd0-5e03-45de-820b-59194effebf1\" (UID: \"ba55bdd0-5e03-45de-820b-59194effebf1\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761670 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55lzc\" (UniqueName: \"kubernetes.io/projected/60619289-ef81-4fff-aacb-066eaa937f4f-kube-api-access-55lzc\") pod \"60619289-ef81-4fff-aacb-066eaa937f4f\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.761694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-custom-prometheus-ca\") pod \"60619289-ef81-4fff-aacb-066eaa937f4f\" (UID: \"60619289-ef81-4fff-aacb-066eaa937f4f\") " Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.762290 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6b271a0-998e-46d6-863f-ce41b946c67d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.764822 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba55bdd0-5e03-45de-820b-59194effebf1-kube-api-access-tcr99" (OuterVolumeSpecName: "kube-api-access-tcr99") pod "ba55bdd0-5e03-45de-820b-59194effebf1" (UID: "ba55bdd0-5e03-45de-820b-59194effebf1"). InnerVolumeSpecName "kube-api-access-tcr99". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.768945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-scripts" (OuterVolumeSpecName: "scripts") pod "ba55bdd0-5e03-45de-820b-59194effebf1" (UID: "ba55bdd0-5e03-45de-820b-59194effebf1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.768948 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ba55bdd0-5e03-45de-820b-59194effebf1" (UID: "ba55bdd0-5e03-45de-820b-59194effebf1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.771342 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60619289-ef81-4fff-aacb-066eaa937f4f-logs" (OuterVolumeSpecName: "logs") pod "60619289-ef81-4fff-aacb-066eaa937f4f" (UID: "60619289-ef81-4fff-aacb-066eaa937f4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.793089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60619289-ef81-4fff-aacb-066eaa937f4f-kube-api-access-55lzc" (OuterVolumeSpecName: "kube-api-access-55lzc") pod "60619289-ef81-4fff-aacb-066eaa937f4f" (UID: "60619289-ef81-4fff-aacb-066eaa937f4f"). InnerVolumeSpecName "kube-api-access-55lzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.796941 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ba55bdd0-5e03-45de-820b-59194effebf1" (UID: "ba55bdd0-5e03-45de-820b-59194effebf1"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.845156 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "60619289-ef81-4fff-aacb-066eaa937f4f" (UID: "60619289-ef81-4fff-aacb-066eaa937f4f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.864162 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60619289-ef81-4fff-aacb-066eaa937f4f-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.864194 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.864205 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55lzc\" (UniqueName: \"kubernetes.io/projected/60619289-ef81-4fff-aacb-066eaa937f4f-kube-api-access-55lzc\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.864214 4869 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.864223 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcr99\" (UniqueName: \"kubernetes.io/projected/ba55bdd0-5e03-45de-820b-59194effebf1-kube-api-access-tcr99\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.864231 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.864238 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.899985 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba55bdd0-5e03-45de-820b-59194effebf1" (UID: "ba55bdd0-5e03-45de-820b-59194effebf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.900436 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-config-data" (OuterVolumeSpecName: "config-data") pod "ba55bdd0-5e03-45de-820b-59194effebf1" (UID: "ba55bdd0-5e03-45de-820b-59194effebf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.930631 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60619289-ef81-4fff-aacb-066eaa937f4f" (UID: "60619289-ef81-4fff-aacb-066eaa937f4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.947659 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-config-data" (OuterVolumeSpecName: "config-data") pod "60619289-ef81-4fff-aacb-066eaa937f4f" (UID: "60619289-ef81-4fff-aacb-066eaa937f4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.965576 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.965604 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.965613 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba55bdd0-5e03-45de-820b-59194effebf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:57 crc kubenswrapper[4869]: I0314 09:19:57.965621 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60619289-ef81-4fff-aacb-066eaa937f4f-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.071286 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1c5363e-e811-4795-9b80-7f4be678b705","Type":"ContainerStarted","Data":"249f30a57af1b6c0e59d4fc2ba9b67a2dde6dc1cc879dc1600cfef5de142278f"} Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.087926 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t7bw5" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.089591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t7bw5" event={"ID":"ba55bdd0-5e03-45de-820b-59194effebf1","Type":"ContainerDied","Data":"4696c930477ca0dad718aeff2bfb21ec8121a5b2b6939b58a8e8c2effdc8e4e4"} Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.089636 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4696c930477ca0dad718aeff2bfb21ec8121a5b2b6939b58a8e8c2effdc8e4e4" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.100998 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" event={"ID":"c6b271a0-998e-46d6-863f-ce41b946c67d","Type":"ContainerDied","Data":"cc05272fa056d4aaf0631c30273b4c5c619a6be58cd35214b2084d778d75eb0e"} Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.101054 4869 scope.go:117] "RemoveContainer" containerID="68d188360ae68bdc5a6ce55a437b895502dbebf1868cad75560ce9c38f419543" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.101239 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f996c95-j4szb" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.131877 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.131670491 podStartE2EDuration="13.131670491s" podCreationTimestamp="2026-03-14 09:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:19:58.094042792 +0000 UTC m=+1351.066324845" watchObservedRunningTime="2026-03-14 09:19:58.131670491 +0000 UTC m=+1351.103952554" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.173625 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-857df8f9c4-4hrpr"] Mar 14 09:19:58 crc kubenswrapper[4869]: E0314 09:19:58.174027 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api-log" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174040 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api-log" Mar 14 09:19:58 crc kubenswrapper[4869]: E0314 09:19:58.174063 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174071 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api" Mar 14 09:19:58 crc kubenswrapper[4869]: E0314 09:19:58.174084 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b271a0-998e-46d6-863f-ce41b946c67d" containerName="dnsmasq-dns" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174091 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b271a0-998e-46d6-863f-ce41b946c67d" containerName="dnsmasq-dns" Mar 14 09:19:58 crc 
kubenswrapper[4869]: E0314 09:19:58.174110 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba55bdd0-5e03-45de-820b-59194effebf1" containerName="keystone-bootstrap" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174117 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba55bdd0-5e03-45de-820b-59194effebf1" containerName="keystone-bootstrap" Mar 14 09:19:58 crc kubenswrapper[4869]: E0314 09:19:58.174127 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b271a0-998e-46d6-863f-ce41b946c67d" containerName="init" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174133 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b271a0-998e-46d6-863f-ce41b946c67d" containerName="init" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174308 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api-log" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174322 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" containerName="watcher-api" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174334 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b271a0-998e-46d6-863f-ce41b946c67d" containerName="dnsmasq-dns" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174347 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba55bdd0-5e03-45de-820b-59194effebf1" containerName="keystone-bootstrap" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.174992 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.182872 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.183041 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.183211 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.183325 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zk6zl" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.183431 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.183554 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.185557 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-857df8f9c4-4hrpr"] Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.226162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"60619289-ef81-4fff-aacb-066eaa937f4f","Type":"ContainerDied","Data":"14065d056595ef4381c41da8d7e16bd6b59fac0367f4fd0b484d30e169fc0de7"} Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.226305 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.241974 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f996c95-j4szb"] Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.245158 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60f01a69-1d04-4788-b13d-f944b5f37b06","Type":"ContainerStarted","Data":"935ea60844a206e8152a8de0ba07bde6ce51760932da8ec64e419d06a379cc2b"} Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.263469 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f996c95-j4szb"] Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.283602 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-combined-ca-bundle\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.283655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvtmz\" (UniqueName: \"kubernetes.io/projected/ec510507-5c39-486f-839f-501fb07a1d07-kube-api-access-gvtmz\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.283768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-scripts\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.283848 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-public-tls-certs\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.283896 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-config-data\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.283924 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-fernet-keys\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.283966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-credential-keys\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.285229 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-internal-tls-certs\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.296118 4869 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="6e1103e5-8974-4f6f-8240-9f000114e32b" containerName="watcher-applier" containerID="cri-o://82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2" gracePeriod=30 Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.296299 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"3683451d7c0cf12aa3458c040cb03518e7577ef80ba5cfb696720772196bef1e"} Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.298777 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="c1ffadc1-b64b-4763-a8b9-b5047caf3166" containerName="watcher-decision-engine" containerID="cri-o://1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433" gracePeriod=30 Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.355814 4869 scope.go:117] "RemoveContainer" containerID="981d515873fb318397f658065fcc33ac3a8be9c9ca1c79c1bc0ef23fd1eebdc7" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.386831 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-config-data\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.386906 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-fernet-keys\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.386972 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-credential-keys\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.387019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-internal-tls-certs\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.387111 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-combined-ca-bundle\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.387135 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvtmz\" (UniqueName: \"kubernetes.io/projected/ec510507-5c39-486f-839f-501fb07a1d07-kube-api-access-gvtmz\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.387221 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-scripts\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.387305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-public-tls-certs\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.393126 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-internal-tls-certs\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.393799 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-fernet-keys\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.394020 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-credential-keys\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.394392 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-combined-ca-bundle\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.394864 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-scripts\") pod \"keystone-857df8f9c4-4hrpr\" (UID: 
\"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.395934 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-config-data\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.396102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec510507-5c39-486f-839f-501fb07a1d07-public-tls-certs\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.410031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvtmz\" (UniqueName: \"kubernetes.io/projected/ec510507-5c39-486f-839f-501fb07a1d07-kube-api-access-gvtmz\") pod \"keystone-857df8f9c4-4hrpr\" (UID: \"ec510507-5c39-486f-839f-501fb07a1d07\") " pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.934897 4869 scope.go:117] "RemoveContainer" containerID="8a942069f0a6d4aa7de9d672d56ea160d9105ed42bfc6bbc897459950bedf3c1" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.959426 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.969115 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.980483 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.982558 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.986762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.987248 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Mar 14 09:19:58 crc kubenswrapper[4869]: I0314 09:19:58.987568 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.013817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.041217 4869 scope.go:117] "RemoveContainer" containerID="55a0385708234f7f5103bc033e67b83693570f78878c6fa43452091b7ab5befd" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.044347 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.100420 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-config-data\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.100498 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.100593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-46wm4\" (UniqueName: \"kubernetes.io/projected/781f0f92-429c-4028-8617-3c5249f510bd-kube-api-access-46wm4\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.100685 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.100716 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/781f0f92-429c-4028-8617-3c5249f510bd-logs\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.100737 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.100759 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.204360 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.204421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.204452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-config-data\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.204490 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.204614 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46wm4\" (UniqueName: \"kubernetes.io/projected/781f0f92-429c-4028-8617-3c5249f510bd-kube-api-access-46wm4\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.204699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" 
Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.204726 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/781f0f92-429c-4028-8617-3c5249f510bd-logs\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.205591 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/781f0f92-429c-4028-8617-3c5249f510bd-logs\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.220206 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.227058 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.228893 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-config-data\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.229434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-internal-tls-certs\") pod 
\"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.235061 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46wm4\" (UniqueName: \"kubernetes.io/projected/781f0f92-429c-4028-8617-3c5249f510bd-kube-api-access-46wm4\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.238026 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/781f0f92-429c-4028-8617-3c5249f510bd-public-tls-certs\") pod \"watcher-api-0\" (UID: \"781f0f92-429c-4028-8617-3c5249f510bd\") " pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.322392 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jmtdx" event={"ID":"5806f1f4-83ae-4f76-ba42-f4943cbef129","Type":"ContainerStarted","Data":"16f66cc143b425762b7476c0fbcc17d5bb966de3b31c9ed3a53bff59927136da"} Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.337972 4869 generic.go:334] "Generic (PLEG): container finished" podID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerID="f4e07bdc55d0d34423f7552cd97943966eae7eace67df01331f16ec3c7632b39" exitCode=1 Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.338046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5858c9f6c-clfct" event={"ID":"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62","Type":"ContainerDied","Data":"f4e07bdc55d0d34423f7552cd97943966eae7eace67df01331f16ec3c7632b39"} Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.349731 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.379159 4869 generic.go:334] "Generic (PLEG): container finished" podID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerID="050e858f30980bf63823f13a4d44bbaa98479252cd1164bba19f482f360487aa" exitCode=1 Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.379242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7558987fbf-ps5jx" event={"ID":"ed19444d-bcb2-4703-9de9-14828f14fed1","Type":"ContainerDied","Data":"050e858f30980bf63823f13a4d44bbaa98479252cd1164bba19f482f360487aa"} Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.386811 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.413331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-custom-prometheus-ca\") pod \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.415977 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1ffadc1-b64b-4763-a8b9-b5047caf3166-logs\") pod \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.416108 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv5nj\" (UniqueName: \"kubernetes.io/projected/c1ffadc1-b64b-4763-a8b9-b5047caf3166-kube-api-access-mv5nj\") pod \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.416143 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-config-data\") pod \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.416218 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-combined-ca-bundle\") pod \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\" (UID: \"c1ffadc1-b64b-4763-a8b9-b5047caf3166\") " Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.417159 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-jmtdx" podStartSLOduration=4.966617101 podStartE2EDuration="54.41713813s" podCreationTimestamp="2026-03-14 09:19:05 +0000 UTC" firstStartedPulling="2026-03-14 09:19:07.628923298 +0000 UTC m=+1300.601205351" lastFinishedPulling="2026-03-14 09:19:57.079444327 +0000 UTC m=+1350.051726380" observedRunningTime="2026-03-14 09:19:59.349308356 +0000 UTC m=+1352.321590409" watchObservedRunningTime="2026-03-14 09:19:59.41713813 +0000 UTC m=+1352.389420183" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.421032 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kjlv8" event={"ID":"34747e66-40bd-4676-9d8e-673fb09120c0","Type":"ContainerStarted","Data":"b227489196ee453d04f492511a870f0d07f537918a1944c03cf846716041d934"} Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.432838 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ffadc1-b64b-4763-a8b9-b5047caf3166-logs" (OuterVolumeSpecName: "logs") pod "c1ffadc1-b64b-4763-a8b9-b5047caf3166" (UID: "c1ffadc1-b64b-4763-a8b9-b5047caf3166"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.443738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ffadc1-b64b-4763-a8b9-b5047caf3166-kube-api-access-mv5nj" (OuterVolumeSpecName: "kube-api-access-mv5nj") pod "c1ffadc1-b64b-4763-a8b9-b5047caf3166" (UID: "c1ffadc1-b64b-4763-a8b9-b5047caf3166"). InnerVolumeSpecName "kube-api-access-mv5nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.471985 4869 generic.go:334] "Generic (PLEG): container finished" podID="c1ffadc1-b64b-4763-a8b9-b5047caf3166" containerID="1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433" exitCode=1 Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.472087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c1ffadc1-b64b-4763-a8b9-b5047caf3166","Type":"ContainerDied","Data":"1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433"} Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.472124 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c1ffadc1-b64b-4763-a8b9-b5047caf3166","Type":"ContainerDied","Data":"8508764cf525b76ec32be94f08a8c1ba879cd71a851ca511ae3e5017971d0c8d"} Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.472145 4869 scope.go:117] "RemoveContainer" containerID="1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.472257 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.478109 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1ffadc1-b64b-4763-a8b9-b5047caf3166" (UID: "c1ffadc1-b64b-4763-a8b9-b5047caf3166"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.480701 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kjlv8" podStartSLOduration=4.454797362 podStartE2EDuration="54.480680057s" podCreationTimestamp="2026-03-14 09:19:05 +0000 UTC" firstStartedPulling="2026-03-14 09:19:07.828542004 +0000 UTC m=+1300.800824047" lastFinishedPulling="2026-03-14 09:19:57.854424699 +0000 UTC m=+1350.826706742" observedRunningTime="2026-03-14 09:19:59.465413302 +0000 UTC m=+1352.437695355" watchObservedRunningTime="2026-03-14 09:19:59.480680057 +0000 UTC m=+1352.452962110" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.516493 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c1ffadc1-b64b-4763-a8b9-b5047caf3166" (UID: "c1ffadc1-b64b-4763-a8b9-b5047caf3166"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.519840 4869 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.519869 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1ffadc1-b64b-4763-a8b9-b5047caf3166-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.519881 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv5nj\" (UniqueName: \"kubernetes.io/projected/c1ffadc1-b64b-4763-a8b9-b5047caf3166-kube-api-access-mv5nj\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.519892 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.527785 4869 generic.go:334] "Generic (PLEG): container finished" podID="e612c02e-1383-4a14-9267-e1742cb95cc7" containerID="666c3fcdee0f3d60ce35f3dd71b484228a4d2c0c4b433f74a33e9da8a140605f" exitCode=0 Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.527991 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jxvhl" event={"ID":"e612c02e-1383-4a14-9267-e1742cb95cc7","Type":"ContainerDied","Data":"666c3fcdee0f3d60ce35f3dd71b484228a4d2c0c4b433f74a33e9da8a140605f"} Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.528627 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.528653 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:19:59 crc 
kubenswrapper[4869]: I0314 09:19:59.547095 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-config-data" (OuterVolumeSpecName: "config-data") pod "c1ffadc1-b64b-4763-a8b9-b5047caf3166" (UID: "c1ffadc1-b64b-4763-a8b9-b5047caf3166"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.566824 4869 scope.go:117] "RemoveContainer" containerID="1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433" Mar 14 09:19:59 crc kubenswrapper[4869]: E0314 09:19:59.584025 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433\": container with ID starting with 1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433 not found: ID does not exist" containerID="1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.584110 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433"} err="failed to get container status \"1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433\": rpc error: code = NotFound desc = could not find container \"1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433\": container with ID starting with 1e3cb8b4f4a14360f9032ada6c19d083381811cd5288b079898940e58cf53433 not found: ID does not exist" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.624097 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ffadc1-b64b-4763-a8b9-b5047caf3166-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.736142 4869 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="60619289-ef81-4fff-aacb-066eaa937f4f" path="/var/lib/kubelet/pods/60619289-ef81-4fff-aacb-066eaa937f4f/volumes" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.741169 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b271a0-998e-46d6-863f-ce41b946c67d" path="/var/lib/kubelet/pods/c6b271a0-998e-46d6-863f-ce41b946c67d/volumes" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.768200 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-857df8f9c4-4hrpr"] Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.840409 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.863882 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.883755 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:59 crc kubenswrapper[4869]: E0314 09:19:59.884288 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ffadc1-b64b-4763-a8b9-b5047caf3166" containerName="watcher-decision-engine" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.884309 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ffadc1-b64b-4763-a8b9-b5047caf3166" containerName="watcher-decision-engine" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.893621 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ffadc1-b64b-4763-a8b9-b5047caf3166" containerName="watcher-decision-engine" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.894796 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.899132 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.919052 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.933025 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.933125 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e0825fa-2453-46a0-b677-79808694bba8-logs\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.933193 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.933245 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5jn9\" (UniqueName: \"kubernetes.io/projected/0e0825fa-2453-46a0-b677-79808694bba8-kube-api-access-s5jn9\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " 
pod="openstack/watcher-decision-engine-0" Mar 14 09:19:59 crc kubenswrapper[4869]: I0314 09:19:59.933266 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.037779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.038689 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5jn9\" (UniqueName: \"kubernetes.io/projected/0e0825fa-2453-46a0-b677-79808694bba8-kube-api-access-s5jn9\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.038724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.038862 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" 
Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.038984 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e0825fa-2453-46a0-b677-79808694bba8-logs\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.039414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e0825fa-2453-46a0-b677-79808694bba8-logs\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.043751 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.044216 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.047187 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.047631 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] 
Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.058398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5jn9\" (UniqueName: \"kubernetes.io/projected/0e0825fa-2453-46a0-b677-79808694bba8-kube-api-access-s5jn9\") pod \"watcher-decision-engine-0\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: W0314 09:20:00.061295 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod781f0f92_429c_4028_8617_3c5249f510bd.slice/crio-06fe2165358d6c64ff1193508c036c5c56d999bc1248de1b147188554d60f208 WatchSource:0}: Error finding container 06fe2165358d6c64ff1193508c036c5c56d999bc1248de1b147188554d60f208: Status 404 returned error can't find the container with id 06fe2165358d6c64ff1193508c036c5c56d999bc1248de1b147188554d60f208 Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.147384 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558000-dzfv4"] Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.149161 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558000-dzfv4" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.151573 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.152077 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.152240 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.172241 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558000-dzfv4"] Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.236708 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.242394 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77xp\" (UniqueName: \"kubernetes.io/projected/98d5ea86-e6ae-43ec-acd0-8123f0a60d87-kube-api-access-j77xp\") pod \"auto-csr-approver-29558000-dzfv4\" (UID: \"98d5ea86-e6ae-43ec-acd0-8123f0a60d87\") " pod="openshift-infra/auto-csr-approver-29558000-dzfv4" Mar 14 09:20:00 crc kubenswrapper[4869]: E0314 09:20:00.310268 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2 is running failed: container process not found" containerID="82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:20:00 crc kubenswrapper[4869]: E0314 09:20:00.311026 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2 is running failed: container process not found" containerID="82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:20:00 crc kubenswrapper[4869]: E0314 09:20:00.311399 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2 is running failed: container process not found" containerID="82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 14 09:20:00 crc kubenswrapper[4869]: E0314 09:20:00.311433 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="6e1103e5-8974-4f6f-8240-9f000114e32b" containerName="watcher-applier" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.344738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j77xp\" (UniqueName: \"kubernetes.io/projected/98d5ea86-e6ae-43ec-acd0-8123f0a60d87-kube-api-access-j77xp\") pod \"auto-csr-approver-29558000-dzfv4\" (UID: \"98d5ea86-e6ae-43ec-acd0-8123f0a60d87\") " pod="openshift-infra/auto-csr-approver-29558000-dzfv4" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.359831 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.373681 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77xp\" (UniqueName: \"kubernetes.io/projected/98d5ea86-e6ae-43ec-acd0-8123f0a60d87-kube-api-access-j77xp\") pod \"auto-csr-approver-29558000-dzfv4\" (UID: \"98d5ea86-e6ae-43ec-acd0-8123f0a60d87\") " pod="openshift-infra/auto-csr-approver-29558000-dzfv4" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.446115 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-config-data\") pod \"6e1103e5-8974-4f6f-8240-9f000114e32b\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.447637 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-combined-ca-bundle\") pod \"6e1103e5-8974-4f6f-8240-9f000114e32b\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.448802 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1103e5-8974-4f6f-8240-9f000114e32b-logs\") pod \"6e1103e5-8974-4f6f-8240-9f000114e32b\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.449039 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44mxn\" (UniqueName: \"kubernetes.io/projected/6e1103e5-8974-4f6f-8240-9f000114e32b-kube-api-access-44mxn\") pod \"6e1103e5-8974-4f6f-8240-9f000114e32b\" (UID: \"6e1103e5-8974-4f6f-8240-9f000114e32b\") " Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.453644 4869 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/6e1103e5-8974-4f6f-8240-9f000114e32b-kube-api-access-44mxn" (OuterVolumeSpecName: "kube-api-access-44mxn") pod "6e1103e5-8974-4f6f-8240-9f000114e32b" (UID: "6e1103e5-8974-4f6f-8240-9f000114e32b"). InnerVolumeSpecName "kube-api-access-44mxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.453840 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e1103e5-8974-4f6f-8240-9f000114e32b-logs" (OuterVolumeSpecName: "logs") pod "6e1103e5-8974-4f6f-8240-9f000114e32b" (UID: "6e1103e5-8974-4f6f-8240-9f000114e32b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.532349 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e1103e5-8974-4f6f-8240-9f000114e32b" (UID: "6e1103e5-8974-4f6f-8240-9f000114e32b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.549832 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"781f0f92-429c-4028-8617-3c5249f510bd","Type":"ContainerStarted","Data":"e678678cecbe1e3b2bc6aa1ad19ec760ae4f34b3b34f3a636d08f98ecaa11a07"} Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.549880 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"781f0f92-429c-4028-8617-3c5249f510bd","Type":"ContainerStarted","Data":"06fe2165358d6c64ff1193508c036c5c56d999bc1248de1b147188554d60f208"} Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.549945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-config-data" (OuterVolumeSpecName: "config-data") pod "6e1103e5-8974-4f6f-8240-9f000114e32b" (UID: "6e1103e5-8974-4f6f-8240-9f000114e32b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.554926 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="d9350b7741367dcb5bafd619c4222f86bac421890722eaddb9455a7ca317532d" exitCode=1 Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.554988 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"d9350b7741367dcb5bafd619c4222f86bac421890722eaddb9455a7ca317532d"} Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.555757 4869 scope.go:117] "RemoveContainer" containerID="d9350b7741367dcb5bafd619c4222f86bac421890722eaddb9455a7ca317532d" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.558240 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1103e5-8974-4f6f-8240-9f000114e32b-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.558274 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44mxn\" (UniqueName: \"kubernetes.io/projected/6e1103e5-8974-4f6f-8240-9f000114e32b-kube-api-access-44mxn\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.558284 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.558294 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1103e5-8974-4f6f-8240-9f000114e32b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.562989 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="6e1103e5-8974-4f6f-8240-9f000114e32b" containerID="82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2" exitCode=0 Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.563065 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"6e1103e5-8974-4f6f-8240-9f000114e32b","Type":"ContainerDied","Data":"82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2"} Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.563100 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"6e1103e5-8974-4f6f-8240-9f000114e32b","Type":"ContainerDied","Data":"ed3a9bb658aa87b6409aa7e968e3388e63d40bd79c092433d49bf431410bb35e"} Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.563121 4869 scope.go:117] "RemoveContainer" containerID="82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.563267 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.576396 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-857df8f9c4-4hrpr" event={"ID":"ec510507-5c39-486f-839f-501fb07a1d07","Type":"ContainerStarted","Data":"5e62053a2aac02c08998086ea5b25c6b1a941b8568d7463285dbd925ca9a336c"} Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.576658 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-857df8f9c4-4hrpr" event={"ID":"ec510507-5c39-486f-839f-501fb07a1d07","Type":"ContainerStarted","Data":"32ee0eab24ef552d7cf819ee656adf5d5106e237a21cb78be4175b526a00b48a"} Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.576788 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.578741 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558000-dzfv4" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.619294 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-857df8f9c4-4hrpr" podStartSLOduration=2.619270443 podStartE2EDuration="2.619270443s" podCreationTimestamp="2026-03-14 09:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:00.610311872 +0000 UTC m=+1353.582593935" watchObservedRunningTime="2026-03-14 09:20:00.619270443 +0000 UTC m=+1353.591552496" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.685779 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.708869 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.723065 4869 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:00 crc kubenswrapper[4869]: E0314 09:20:00.724259 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1103e5-8974-4f6f-8240-9f000114e32b" containerName="watcher-applier" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.724281 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1103e5-8974-4f6f-8240-9f000114e32b" containerName="watcher-applier" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.724555 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1103e5-8974-4f6f-8240-9f000114e32b" containerName="watcher-applier" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.725853 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.733705 4869 scope.go:117] "RemoveContainer" containerID="82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.733974 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.735280 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:00 crc kubenswrapper[4869]: E0314 09:20:00.737446 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2\": container with ID starting with 82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2 not found: ID does not exist" containerID="82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.737529 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2"} err="failed to get container status \"82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2\": rpc error: code = NotFound desc = could not find container \"82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2\": container with ID starting with 82c01327d1e625daf44c1667acff22a7bedece6053fee28242d55491bb84c9f2 not found: ID does not exist" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.767976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxxm\" (UniqueName: \"kubernetes.io/projected/28596820-2a8d-4347-afec-5e32a58a0398-kube-api-access-5fxxm\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.768098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28596820-2a8d-4347-afec-5e32a58a0398-config-data\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.768148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28596820-2a8d-4347-afec-5e32a58a0398-logs\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.768198 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28596820-2a8d-4347-afec-5e32a58a0398-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" 
Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.865804 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.869890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28596820-2a8d-4347-afec-5e32a58a0398-config-data\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.869962 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28596820-2a8d-4347-afec-5e32a58a0398-logs\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.870019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28596820-2a8d-4347-afec-5e32a58a0398-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.870108 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fxxm\" (UniqueName: \"kubernetes.io/projected/28596820-2a8d-4347-afec-5e32a58a0398-kube-api-access-5fxxm\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.872921 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28596820-2a8d-4347-afec-5e32a58a0398-logs\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: 
I0314 09:20:00.889019 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28596820-2a8d-4347-afec-5e32a58a0398-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.897099 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28596820-2a8d-4347-afec-5e32a58a0398-config-data\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:00 crc kubenswrapper[4869]: I0314 09:20:00.905914 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fxxm\" (UniqueName: \"kubernetes.io/projected/28596820-2a8d-4347-afec-5e32a58a0398-kube-api-access-5fxxm\") pod \"watcher-applier-0\" (UID: \"28596820-2a8d-4347-afec-5e32a58a0398\") " pod="openstack/watcher-applier-0" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.118258 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.374949 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558000-dzfv4"] Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.389555 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jxvhl" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.505708 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkrgb\" (UniqueName: \"kubernetes.io/projected/e612c02e-1383-4a14-9267-e1742cb95cc7-kube-api-access-gkrgb\") pod \"e612c02e-1383-4a14-9267-e1742cb95cc7\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.506127 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-config-data\") pod \"e612c02e-1383-4a14-9267-e1742cb95cc7\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.506216 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-combined-ca-bundle\") pod \"e612c02e-1383-4a14-9267-e1742cb95cc7\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.506322 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-scripts\") pod \"e612c02e-1383-4a14-9267-e1742cb95cc7\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.506403 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612c02e-1383-4a14-9267-e1742cb95cc7-logs\") pod \"e612c02e-1383-4a14-9267-e1742cb95cc7\" (UID: \"e612c02e-1383-4a14-9267-e1742cb95cc7\") " Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.509117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e612c02e-1383-4a14-9267-e1742cb95cc7-logs" (OuterVolumeSpecName: "logs") pod "e612c02e-1383-4a14-9267-e1742cb95cc7" (UID: "e612c02e-1383-4a14-9267-e1742cb95cc7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.516942 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-scripts" (OuterVolumeSpecName: "scripts") pod "e612c02e-1383-4a14-9267-e1742cb95cc7" (UID: "e612c02e-1383-4a14-9267-e1742cb95cc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.523768 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e612c02e-1383-4a14-9267-e1742cb95cc7-kube-api-access-gkrgb" (OuterVolumeSpecName: "kube-api-access-gkrgb") pod "e612c02e-1383-4a14-9267-e1742cb95cc7" (UID: "e612c02e-1383-4a14-9267-e1742cb95cc7"). InnerVolumeSpecName "kube-api-access-gkrgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.551614 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e612c02e-1383-4a14-9267-e1742cb95cc7" (UID: "e612c02e-1383-4a14-9267-e1742cb95cc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.562770 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-config-data" (OuterVolumeSpecName: "config-data") pod "e612c02e-1383-4a14-9267-e1742cb95cc7" (UID: "e612c02e-1383-4a14-9267-e1742cb95cc7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.610783 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkrgb\" (UniqueName: \"kubernetes.io/projected/e612c02e-1383-4a14-9267-e1742cb95cc7-kube-api-access-gkrgb\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.610814 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.610824 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.610832 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e612c02e-1383-4a14-9267-e1742cb95cc7-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.610841 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e612c02e-1383-4a14-9267-e1742cb95cc7-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.641350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558000-dzfv4" event={"ID":"98d5ea86-e6ae-43ec-acd0-8123f0a60d87","Type":"ContainerStarted","Data":"62689c7a87379db5c08a5dca718c355129da89e1bb614ed2af0751ae01ef9de1"} Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.649749 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jxvhl" event={"ID":"e612c02e-1383-4a14-9267-e1742cb95cc7","Type":"ContainerDied","Data":"9db6936d5a2a8785e294d2918964ca325b11d2939cc302e03aa58d1620894cad"} 
Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.650113 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db6936d5a2a8785e294d2918964ca325b11d2939cc302e03aa58d1620894cad" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.650028 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jxvhl" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.657060 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"6a05c7fd61f3133eb11294b7d7a7eb6fdeb39b8f8720b8ed360334d6d99854a3"} Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.681134 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerStarted","Data":"8d57ba50d496baf7056dfcaa55cade1e2b3b21f4bf62759e539fb36e9105bb85"} Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.681209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerStarted","Data":"df6f23c2660847a0fd67d395c3708f5b0327b6f40676e35c662a3ee9d4cb0ee7"} Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.696884 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"781f0f92-429c-4028-8617-3c5249f510bd","Type":"ContainerStarted","Data":"7db19dbc839cc3e4f0f28259f73a556584af4c6101cda73d36b788f23d2b4bda"} Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.697347 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.746981 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" 
podStartSLOduration=2.746960299 podStartE2EDuration="2.746960299s" podCreationTimestamp="2026-03-14 09:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:01.721744217 +0000 UTC m=+1354.694026280" watchObservedRunningTime="2026-03-14 09:20:01.746960299 +0000 UTC m=+1354.719242352" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.750714 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e1103e5-8974-4f6f-8240-9f000114e32b" path="/var/lib/kubelet/pods/6e1103e5-8974-4f6f-8240-9f000114e32b/volumes" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.754220 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ffadc1-b64b-4763-a8b9-b5047caf3166" path="/var/lib/kubelet/pods/c1ffadc1-b64b-4763-a8b9-b5047caf3166/volumes" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.813083 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6966c9cd66-p4jg9"] Mar 14 09:20:01 crc kubenswrapper[4869]: E0314 09:20:01.813793 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e612c02e-1383-4a14-9267-e1742cb95cc7" containerName="placement-db-sync" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.813892 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e612c02e-1383-4a14-9267-e1742cb95cc7" containerName="placement-db-sync" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.814195 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e612c02e-1383-4a14-9267-e1742cb95cc7" containerName="placement-db-sync" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.816897 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.820773 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.821037 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.821208 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.821382 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-n6688" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.822215 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.831888 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.831868414 podStartE2EDuration="3.831868414s" podCreationTimestamp="2026-03-14 09:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:01.749663556 +0000 UTC m=+1354.721945629" watchObservedRunningTime="2026-03-14 09:20:01.831868414 +0000 UTC m=+1354.804150467" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.832875 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6966c9cd66-p4jg9"] Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.851652 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.916943 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-internal-tls-certs\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.917041 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-combined-ca-bundle\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.917087 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-config-data\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.917112 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-public-tls-certs\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.917152 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4p2d\" (UniqueName: \"kubernetes.io/projected/e5be45b6-5241-4347-b552-b1dc75178894-kube-api-access-g4p2d\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.917236 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5be45b6-5241-4347-b552-b1dc75178894-logs\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:01 crc kubenswrapper[4869]: I0314 09:20:01.917293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-scripts\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.019300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-public-tls-certs\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.019391 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4p2d\" (UniqueName: \"kubernetes.io/projected/e5be45b6-5241-4347-b552-b1dc75178894-kube-api-access-g4p2d\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.019529 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5be45b6-5241-4347-b552-b1dc75178894-logs\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.019592 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-scripts\") 
pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.019629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-internal-tls-certs\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.019680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-combined-ca-bundle\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.019723 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-config-data\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.020731 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5be45b6-5241-4347-b552-b1dc75178894-logs\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.027142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-config-data\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " 
pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.027365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-combined-ca-bundle\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.030264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-public-tls-certs\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.040292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-scripts\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.043025 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5be45b6-5241-4347-b552-b1dc75178894-internal-tls-certs\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.044328 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4p2d\" (UniqueName: \"kubernetes.io/projected/e5be45b6-5241-4347-b552-b1dc75178894-kube-api-access-g4p2d\") pod \"placement-6966c9cd66-p4jg9\" (UID: \"e5be45b6-5241-4347-b552-b1dc75178894\") " pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.149876 
4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.551646 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.552095 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.554140 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.712563 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6966c9cd66-p4jg9"] Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.715086 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"28596820-2a8d-4347-afec-5e32a58a0398","Type":"ContainerStarted","Data":"2a48d1b5f6c6f1912a5198f93554f4b791d7cc5e76226cd9c0c59a9e7370eca5"} Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 09:20:02.715129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"28596820-2a8d-4347-afec-5e32a58a0398","Type":"ContainerStarted","Data":"577db9c316b01f9820190860d6d1a173bfded2e9519bdad832fa125caaed8b35"} Mar 14 09:20:02 crc kubenswrapper[4869]: W0314 09:20:02.724690 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5be45b6_5241_4347_b552_b1dc75178894.slice/crio-664e8056d5a4d9c125b2f8dc20bd0bdc6b80408e76cbcd3420cc7ec7b61824c9 WatchSource:0}: Error finding container 664e8056d5a4d9c125b2f8dc20bd0bdc6b80408e76cbcd3420cc7ec7b61824c9: Status 404 returned error can't find the container with id 664e8056d5a4d9c125b2f8dc20bd0bdc6b80408e76cbcd3420cc7ec7b61824c9 Mar 14 09:20:02 crc kubenswrapper[4869]: I0314 
09:20:02.735076 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.735058701 podStartE2EDuration="2.735058701s" podCreationTimestamp="2026-03-14 09:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:02.733881182 +0000 UTC m=+1355.706163245" watchObservedRunningTime="2026-03-14 09:20:02.735058701 +0000 UTC m=+1355.707340764" Mar 14 09:20:03 crc kubenswrapper[4869]: I0314 09:20:03.739153 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6966c9cd66-p4jg9" event={"ID":"e5be45b6-5241-4347-b552-b1dc75178894","Type":"ContainerStarted","Data":"9b1a8f30cf070315fc527ab6e82a618f75b7d4a2d0ba5eb12fad842813adde1b"} Mar 14 09:20:03 crc kubenswrapper[4869]: I0314 09:20:03.739584 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6966c9cd66-p4jg9" event={"ID":"e5be45b6-5241-4347-b552-b1dc75178894","Type":"ContainerStarted","Data":"664e8056d5a4d9c125b2f8dc20bd0bdc6b80408e76cbcd3420cc7ec7b61824c9"} Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.350861 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.350959 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.405314 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.405374 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.538823 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:20:04 crc 
kubenswrapper[4869]: I0314 09:20:04.538931 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.755912 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6966c9cd66-p4jg9" event={"ID":"e5be45b6-5241-4347-b552-b1dc75178894","Type":"ContainerStarted","Data":"14f97e43dc6535ba1b98d4c4f9613302486612b0a36ca0ccac0e181a17b6c1d9"} Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.756267 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.756287 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.797538 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6966c9cd66-p4jg9" podStartSLOduration=3.797494652 podStartE2EDuration="3.797494652s" podCreationTimestamp="2026-03-14 09:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:04.775034288 +0000 UTC m=+1357.747316351" watchObservedRunningTime="2026-03-14 09:20:04.797494652 +0000 UTC m=+1357.769776725" Mar 14 09:20:04 crc kubenswrapper[4869]: I0314 09:20:04.799463 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:20:05 crc kubenswrapper[4869]: I0314 09:20:05.769424 4869 generic.go:334] "Generic (PLEG): container finished" podID="98d5ea86-e6ae-43ec-acd0-8123f0a60d87" containerID="59357f3796fae4d201fbab2e627c8f81c9b517d99abd50f3c3c87f71d015b2c2" exitCode=0 Mar 14 09:20:05 crc kubenswrapper[4869]: I0314 09:20:05.769748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558000-dzfv4" 
event={"ID":"98d5ea86-e6ae-43ec-acd0-8123f0a60d87","Type":"ContainerDied","Data":"59357f3796fae4d201fbab2e627c8f81c9b517d99abd50f3c3c87f71d015b2c2"} Mar 14 09:20:05 crc kubenswrapper[4869]: I0314 09:20:05.773597 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e0825fa-2453-46a0-b677-79808694bba8" containerID="8d57ba50d496baf7056dfcaa55cade1e2b3b21f4bf62759e539fb36e9105bb85" exitCode=1 Mar 14 09:20:05 crc kubenswrapper[4869]: I0314 09:20:05.773678 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerDied","Data":"8d57ba50d496baf7056dfcaa55cade1e2b3b21f4bf62759e539fb36e9105bb85"} Mar 14 09:20:05 crc kubenswrapper[4869]: I0314 09:20:05.774647 4869 scope.go:117] "RemoveContainer" containerID="8d57ba50d496baf7056dfcaa55cade1e2b3b21f4bf62759e539fb36e9105bb85" Mar 14 09:20:06 crc kubenswrapper[4869]: I0314 09:20:06.120241 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Mar 14 09:20:06 crc kubenswrapper[4869]: I0314 09:20:06.477461 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 14 09:20:06 crc kubenswrapper[4869]: I0314 09:20:06.477529 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 14 09:20:06 crc kubenswrapper[4869]: I0314 09:20:06.518238 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 14 09:20:06 crc kubenswrapper[4869]: I0314 09:20:06.533385 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 14 09:20:06 crc kubenswrapper[4869]: I0314 09:20:06.782392 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 14 09:20:06 crc 
kubenswrapper[4869]: I0314 09:20:06.782726 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.057221 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.057722 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.059745 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.351296 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.364916 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.605133 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.605194 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.822441 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558000-dzfv4" 
event={"ID":"98d5ea86-e6ae-43ec-acd0-8123f0a60d87","Type":"ContainerDied","Data":"62689c7a87379db5c08a5dca718c355129da89e1bb614ed2af0751ae01ef9de1"} Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.822493 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62689c7a87379db5c08a5dca718c355129da89e1bb614ed2af0751ae01ef9de1" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.830040 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="3683451d7c0cf12aa3458c040cb03518e7577ef80ba5cfb696720772196bef1e" exitCode=1 Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.831459 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"3683451d7c0cf12aa3458c040cb03518e7577ef80ba5cfb696720772196bef1e"} Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.831526 4869 scope.go:117] "RemoveContainer" containerID="699930b27e75b4ac2d3a83a8e22e90af1021ec1c8d1dcf16a6d11d5c3b5de617" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.832873 4869 scope.go:117] "RemoveContainer" containerID="3683451d7c0cf12aa3458c040cb03518e7577ef80ba5cfb696720772196bef1e" Mar 14 09:20:09 crc kubenswrapper[4869]: E0314 09:20:09.833116 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.845739 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Mar 14 09:20:09 crc kubenswrapper[4869]: I0314 09:20:09.908907 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558000-dzfv4" Mar 14 09:20:10 crc kubenswrapper[4869]: I0314 09:20:10.023471 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j77xp\" (UniqueName: \"kubernetes.io/projected/98d5ea86-e6ae-43ec-acd0-8123f0a60d87-kube-api-access-j77xp\") pod \"98d5ea86-e6ae-43ec-acd0-8123f0a60d87\" (UID: \"98d5ea86-e6ae-43ec-acd0-8123f0a60d87\") " Mar 14 09:20:10 crc kubenswrapper[4869]: I0314 09:20:10.034617 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d5ea86-e6ae-43ec-acd0-8123f0a60d87-kube-api-access-j77xp" (OuterVolumeSpecName: "kube-api-access-j77xp") pod "98d5ea86-e6ae-43ec-acd0-8123f0a60d87" (UID: "98d5ea86-e6ae-43ec-acd0-8123f0a60d87"). InnerVolumeSpecName "kube-api-access-j77xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:10 crc kubenswrapper[4869]: I0314 09:20:10.126231 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j77xp\" (UniqueName: \"kubernetes.io/projected/98d5ea86-e6ae-43ec-acd0-8123f0a60d87-kube-api-access-j77xp\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:10 crc kubenswrapper[4869]: I0314 09:20:10.237617 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:10 crc kubenswrapper[4869]: I0314 09:20:10.237666 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:10 crc kubenswrapper[4869]: I0314 09:20:10.840153 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558000-dzfv4" Mar 14 09:20:10 crc kubenswrapper[4869]: I0314 09:20:10.994608 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557994-7hfhb"] Mar 14 09:20:11 crc kubenswrapper[4869]: I0314 09:20:11.003535 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557994-7hfhb"] Mar 14 09:20:11 crc kubenswrapper[4869]: I0314 09:20:11.120004 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Mar 14 09:20:11 crc kubenswrapper[4869]: I0314 09:20:11.150880 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Mar 14 09:20:11 crc kubenswrapper[4869]: I0314 09:20:11.718367 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c2f58d-4863-43b4-b4ec-d839270ade42" path="/var/lib/kubelet/pods/75c2f58d-4863-43b4-b4ec-d839270ade42/volumes" Mar 14 09:20:11 crc kubenswrapper[4869]: I0314 09:20:11.852678 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="6a05c7fd61f3133eb11294b7d7a7eb6fdeb39b8f8720b8ed360334d6d99854a3" exitCode=1 Mar 14 09:20:11 crc kubenswrapper[4869]: I0314 09:20:11.852756 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"6a05c7fd61f3133eb11294b7d7a7eb6fdeb39b8f8720b8ed360334d6d99854a3"} Mar 14 09:20:11 crc kubenswrapper[4869]: I0314 09:20:11.853063 4869 scope.go:117] "RemoveContainer" containerID="d9350b7741367dcb5bafd619c4222f86bac421890722eaddb9455a7ca317532d" Mar 14 09:20:11 crc kubenswrapper[4869]: I0314 09:20:11.854200 4869 scope.go:117] "RemoveContainer" containerID="6a05c7fd61f3133eb11294b7d7a7eb6fdeb39b8f8720b8ed360334d6d99854a3" Mar 14 09:20:11 crc kubenswrapper[4869]: E0314 
09:20:11.854404 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:20:11 crc kubenswrapper[4869]: I0314 09:20:11.895218 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Mar 14 09:20:12 crc kubenswrapper[4869]: E0314 09:20:12.076105 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" Mar 14 09:20:12 crc kubenswrapper[4869]: I0314 09:20:12.868800 4869 generic.go:334] "Generic (PLEG): container finished" podID="34747e66-40bd-4676-9d8e-673fb09120c0" containerID="b227489196ee453d04f492511a870f0d07f537918a1944c03cf846716041d934" exitCode=0 Mar 14 09:20:12 crc kubenswrapper[4869]: I0314 09:20:12.868868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kjlv8" event={"ID":"34747e66-40bd-4676-9d8e-673fb09120c0","Type":"ContainerDied","Data":"b227489196ee453d04f492511a870f0d07f537918a1944c03cf846716041d934"} Mar 14 09:20:12 crc kubenswrapper[4869]: I0314 09:20:12.880577 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60f01a69-1d04-4788-b13d-f944b5f37b06","Type":"ContainerStarted","Data":"8476a8365f20ad972c20819eba6371c994ac8e485ca41da57441a4b70535e76a"} Mar 14 09:20:12 crc kubenswrapper[4869]: I0314 09:20:12.880735 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" 
containerName="ceilometer-notification-agent" containerID="cri-o://d9439a2f209adf9d7b1d2bb9a0a3cff8f81229588faa1acc51d30baebcee1776" gracePeriod=30 Mar 14 09:20:12 crc kubenswrapper[4869]: I0314 09:20:12.881152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:20:12 crc kubenswrapper[4869]: I0314 09:20:12.881207 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="proxy-httpd" containerID="cri-o://8476a8365f20ad972c20819eba6371c994ac8e485ca41da57441a4b70535e76a" gracePeriod=30 Mar 14 09:20:12 crc kubenswrapper[4869]: I0314 09:20:12.881313 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="sg-core" containerID="cri-o://935ea60844a206e8152a8de0ba07bde6ce51760932da8ec64e419d06a379cc2b" gracePeriod=30 Mar 14 09:20:12 crc kubenswrapper[4869]: I0314 09:20:12.889575 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerStarted","Data":"a94df3683f4de701c59fa43e469daf0695f9b06083105d6ba6172c5e734f3124"} Mar 14 09:20:13 crc kubenswrapper[4869]: I0314 09:20:13.914112 4869 generic.go:334] "Generic (PLEG): container finished" podID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerID="8476a8365f20ad972c20819eba6371c994ac8e485ca41da57441a4b70535e76a" exitCode=0 Mar 14 09:20:13 crc kubenswrapper[4869]: I0314 09:20:13.914672 4869 generic.go:334] "Generic (PLEG): container finished" podID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerID="935ea60844a206e8152a8de0ba07bde6ce51760932da8ec64e419d06a379cc2b" exitCode=2 Mar 14 09:20:13 crc kubenswrapper[4869]: I0314 09:20:13.914809 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"60f01a69-1d04-4788-b13d-f944b5f37b06","Type":"ContainerDied","Data":"8476a8365f20ad972c20819eba6371c994ac8e485ca41da57441a4b70535e76a"} Mar 14 09:20:13 crc kubenswrapper[4869]: I0314 09:20:13.914834 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60f01a69-1d04-4788-b13d-f944b5f37b06","Type":"ContainerDied","Data":"935ea60844a206e8152a8de0ba07bde6ce51760932da8ec64e419d06a379cc2b"} Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.300530 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.404384 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.404740 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.405646 4869 scope.go:117] "RemoveContainer" containerID="6a05c7fd61f3133eb11294b7d7a7eb6fdeb39b8f8720b8ed360334d6d99854a3" Mar 14 09:20:14 crc kubenswrapper[4869]: E0314 09:20:14.406018 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.412347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-combined-ca-bundle\") pod \"34747e66-40bd-4676-9d8e-673fb09120c0\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 
09:20:14.412532 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-db-sync-config-data\") pod \"34747e66-40bd-4676-9d8e-673fb09120c0\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.412584 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdcw7\" (UniqueName: \"kubernetes.io/projected/34747e66-40bd-4676-9d8e-673fb09120c0-kube-api-access-cdcw7\") pod \"34747e66-40bd-4676-9d8e-673fb09120c0\" (UID: \"34747e66-40bd-4676-9d8e-673fb09120c0\") " Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.419177 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34747e66-40bd-4676-9d8e-673fb09120c0-kube-api-access-cdcw7" (OuterVolumeSpecName: "kube-api-access-cdcw7") pod "34747e66-40bd-4676-9d8e-673fb09120c0" (UID: "34747e66-40bd-4676-9d8e-673fb09120c0"). InnerVolumeSpecName "kube-api-access-cdcw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.427680 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "34747e66-40bd-4676-9d8e-673fb09120c0" (UID: "34747e66-40bd-4676-9d8e-673fb09120c0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.448383 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34747e66-40bd-4676-9d8e-673fb09120c0" (UID: "34747e66-40bd-4676-9d8e-673fb09120c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.514853 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.514883 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/34747e66-40bd-4676-9d8e-673fb09120c0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.514892 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdcw7\" (UniqueName: \"kubernetes.io/projected/34747e66-40bd-4676-9d8e-673fb09120c0-kube-api-access-cdcw7\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.538615 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.538665 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.539402 4869 scope.go:117] "RemoveContainer" containerID="3683451d7c0cf12aa3458c040cb03518e7577ef80ba5cfb696720772196bef1e" Mar 14 09:20:14 crc kubenswrapper[4869]: E0314 09:20:14.539666 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 10s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.929194 4869 generic.go:334] "Generic (PLEG): container finished" podID="60f01a69-1d04-4788-b13d-f944b5f37b06" 
containerID="d9439a2f209adf9d7b1d2bb9a0a3cff8f81229588faa1acc51d30baebcee1776" exitCode=0 Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.929278 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60f01a69-1d04-4788-b13d-f944b5f37b06","Type":"ContainerDied","Data":"d9439a2f209adf9d7b1d2bb9a0a3cff8f81229588faa1acc51d30baebcee1776"} Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.930845 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kjlv8" event={"ID":"34747e66-40bd-4676-9d8e-673fb09120c0","Type":"ContainerDied","Data":"1f63c43ebe907e6da699558c0a21de5cd27097e32487b94dcfb1a1ba21032c83"} Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.930872 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f63c43ebe907e6da699558c0a21de5cd27097e32487b94dcfb1a1ba21032c83" Mar 14 09:20:14 crc kubenswrapper[4869]: I0314 09:20:14.930952 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kjlv8" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.192005 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-659c5f77bf-p8tvx"] Mar 14 09:20:15 crc kubenswrapper[4869]: E0314 09:20:15.192940 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d5ea86-e6ae-43ec-acd0-8123f0a60d87" containerName="oc" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.192969 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d5ea86-e6ae-43ec-acd0-8123f0a60d87" containerName="oc" Mar 14 09:20:15 crc kubenswrapper[4869]: E0314 09:20:15.193027 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34747e66-40bd-4676-9d8e-673fb09120c0" containerName="barbican-db-sync" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.193039 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="34747e66-40bd-4676-9d8e-673fb09120c0" containerName="barbican-db-sync" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.193280 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="34747e66-40bd-4676-9d8e-673fb09120c0" containerName="barbican-db-sync" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.193322 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d5ea86-e6ae-43ec-acd0-8123f0a60d87" containerName="oc" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.194618 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.200733 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cz7xs" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.201835 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.204687 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.231354 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-659c5f77bf-p8tvx"] Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.259455 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-768f98d44b-nmkh7"] Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.261386 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.264280 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.270324 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-768f98d44b-nmkh7"] Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.309615 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74d57497c5-s4cfd"] Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.311187 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.327615 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74d57497c5-s4cfd"] Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.337355 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-config-data-custom\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.337463 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-config-data\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.337488 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-config-data\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.337575 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-combined-ca-bundle\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.352606 
4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-combined-ca-bundle\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.352736 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znvsp\" (UniqueName: \"kubernetes.io/projected/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-kube-api-access-znvsp\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.352795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-config-data-custom\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.352893 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-logs\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.352964 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw96s\" (UniqueName: \"kubernetes.io/projected/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-kube-api-access-kw96s\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: 
\"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.353094 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-logs\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.370354 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.426372 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85ffd7569d-mt675"] Mar 14 09:20:15 crc kubenswrapper[4869]: E0314 09:20:15.426966 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="ceilometer-notification-agent" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.426989 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="ceilometer-notification-agent" Mar 14 09:20:15 crc kubenswrapper[4869]: E0314 09:20:15.427000 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="proxy-httpd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.427007 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="proxy-httpd" Mar 14 09:20:15 crc kubenswrapper[4869]: E0314 09:20:15.427024 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="sg-core" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.427031 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="sg-core" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.427257 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="proxy-httpd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.427281 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="ceilometer-notification-agent" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.427289 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" containerName="sg-core" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.428315 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.430626 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.436482 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85ffd7569d-mt675"] Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data-custom\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466142 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-combined-ca-bundle\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " 
pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466170 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-config\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466204 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-svc\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znvsp\" (UniqueName: \"kubernetes.io/projected/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-kube-api-access-znvsp\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466257 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-config-data-custom\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466283 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njnrd\" (UniqueName: \"kubernetes.io/projected/769cfb80-dc46-4b86-aabd-c038375e5c3d-kube-api-access-njnrd\") pod 
\"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466307 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466344 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-logs\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466378 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56nq9\" (UniqueName: \"kubernetes.io/projected/33df5438-d9cd-4818-be10-a5d630a27193-kube-api-access-56nq9\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466400 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-nb\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw96s\" (UniqueName: 
\"kubernetes.io/projected/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-kube-api-access-kw96s\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466450 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-sb\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466480 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-logs\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-config-data-custom\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466588 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-config-data\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466613 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-config-data\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466645 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33df5438-d9cd-4818-be10-a5d630a27193-logs\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-combined-ca-bundle\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466710 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-swift-storage-0\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.466738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-combined-ca-bundle\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.471629 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-logs\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.472613 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-logs\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.473548 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-config-data-custom\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.476743 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-config-data\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.476877 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-config-data-custom\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.479978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-combined-ca-bundle\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.480789 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-config-data\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.481029 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-combined-ca-bundle\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.490831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw96s\" (UniqueName: \"kubernetes.io/projected/c8af003e-d2bd-4748-b27c-5cdcb2e7914f-kube-api-access-kw96s\") pod \"barbican-worker-659c5f77bf-p8tvx\" (UID: \"c8af003e-d2bd-4748-b27c-5cdcb2e7914f\") " pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.494079 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znvsp\" (UniqueName: \"kubernetes.io/projected/6d71cfc4-b9dc-4fe1-be63-7da133a49f08-kube-api-access-znvsp\") pod \"barbican-keystone-listener-768f98d44b-nmkh7\" (UID: \"6d71cfc4-b9dc-4fe1-be63-7da133a49f08\") " pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.543043 4869 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-worker-659c5f77bf-p8tvx" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.572945 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-combined-ca-bundle\") pod \"60f01a69-1d04-4788-b13d-f944b5f37b06\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.573333 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-run-httpd\") pod \"60f01a69-1d04-4788-b13d-f944b5f37b06\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.573361 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-log-httpd\") pod \"60f01a69-1d04-4788-b13d-f944b5f37b06\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.573484 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb2d9\" (UniqueName: \"kubernetes.io/projected/60f01a69-1d04-4788-b13d-f944b5f37b06-kube-api-access-kb2d9\") pod \"60f01a69-1d04-4788-b13d-f944b5f37b06\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.573552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-config-data\") pod \"60f01a69-1d04-4788-b13d-f944b5f37b06\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.573593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-sg-core-conf-yaml\") pod \"60f01a69-1d04-4788-b13d-f944b5f37b06\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.573690 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-scripts\") pod \"60f01a69-1d04-4788-b13d-f944b5f37b06\" (UID: \"60f01a69-1d04-4788-b13d-f944b5f37b06\") " Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.573897 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "60f01a69-1d04-4788-b13d-f944b5f37b06" (UID: "60f01a69-1d04-4788-b13d-f944b5f37b06"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574216 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-config\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-svc\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574307 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njnrd\" (UniqueName: \"kubernetes.io/projected/769cfb80-dc46-4b86-aabd-c038375e5c3d-kube-api-access-njnrd\") pod 
\"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56nq9\" (UniqueName: \"kubernetes.io/projected/33df5438-d9cd-4818-be10-a5d630a27193-kube-api-access-56nq9\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574411 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-nb\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-sb\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574569 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33df5438-d9cd-4818-be10-a5d630a27193-logs\") pod \"barbican-api-85ffd7569d-mt675\" (UID: 
\"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574601 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-combined-ca-bundle\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-swift-storage-0\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574673 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data-custom\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574754 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.576281 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-svc\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.576686 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-config\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.574406 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "60f01a69-1d04-4788-b13d-f944b5f37b06" (UID: "60f01a69-1d04-4788-b13d-f944b5f37b06"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.576973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f01a69-1d04-4788-b13d-f944b5f37b06-kube-api-access-kb2d9" (OuterVolumeSpecName: "kube-api-access-kb2d9") pod "60f01a69-1d04-4788-b13d-f944b5f37b06" (UID: "60f01a69-1d04-4788-b13d-f944b5f37b06"). InnerVolumeSpecName "kube-api-access-kb2d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.577306 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33df5438-d9cd-4818-be10-a5d630a27193-logs\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.577976 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-sb\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.578384 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-nb\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.578491 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-swift-storage-0\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.581474 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-scripts" (OuterVolumeSpecName: "scripts") pod "60f01a69-1d04-4788-b13d-f944b5f37b06" (UID: "60f01a69-1d04-4788-b13d-f944b5f37b06"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.583309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data-custom\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.593218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.598683 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njnrd\" (UniqueName: \"kubernetes.io/projected/769cfb80-dc46-4b86-aabd-c038375e5c3d-kube-api-access-njnrd\") pod \"dnsmasq-dns-74d57497c5-s4cfd\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.599213 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56nq9\" (UniqueName: \"kubernetes.io/projected/33df5438-d9cd-4818-be10-a5d630a27193-kube-api-access-56nq9\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.602201 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-combined-ca-bundle\") pod \"barbican-api-85ffd7569d-mt675\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " pod="openstack/barbican-api-85ffd7569d-mt675" Mar 
14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.604912 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "60f01a69-1d04-4788-b13d-f944b5f37b06" (UID: "60f01a69-1d04-4788-b13d-f944b5f37b06"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.647889 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60f01a69-1d04-4788-b13d-f944b5f37b06" (UID: "60f01a69-1d04-4788-b13d-f944b5f37b06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.679998 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.680350 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.680365 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60f01a69-1d04-4788-b13d-f944b5f37b06-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.680378 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb2d9\" (UniqueName: \"kubernetes.io/projected/60f01a69-1d04-4788-b13d-f944b5f37b06-kube-api-access-kb2d9\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 
09:20:15.680392 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.717528 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.721968 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.745713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.753128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-config-data" (OuterVolumeSpecName: "config-data") pod "60f01a69-1d04-4788-b13d-f944b5f37b06" (UID: "60f01a69-1d04-4788-b13d-f944b5f37b06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.781914 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60f01a69-1d04-4788-b13d-f944b5f37b06-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.947385 4869 generic.go:334] "Generic (PLEG): container finished" podID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerID="76b80c712aec5b31ce3165b6defd2939fa432bc2b9ea72c8f6c8a57fff2da6ff" exitCode=137 Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.947425 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7558987fbf-ps5jx" event={"ID":"ed19444d-bcb2-4703-9de9-14828f14fed1","Type":"ContainerDied","Data":"76b80c712aec5b31ce3165b6defd2939fa432bc2b9ea72c8f6c8a57fff2da6ff"} Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.966900 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60f01a69-1d04-4788-b13d-f944b5f37b06","Type":"ContainerDied","Data":"e856645248b7c3a3eb211f61bc1e7dfa3bc5a134ce8c170e40f38824358f68cc"} Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.966956 4869 scope.go:117] "RemoveContainer" containerID="8476a8365f20ad972c20819eba6371c994ac8e485ca41da57441a4b70535e76a" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.967111 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.985515 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e0825fa-2453-46a0-b677-79808694bba8" containerID="a94df3683f4de701c59fa43e469daf0695f9b06083105d6ba6172c5e734f3124" exitCode=1 Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.985598 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerDied","Data":"a94df3683f4de701c59fa43e469daf0695f9b06083105d6ba6172c5e734f3124"} Mar 14 09:20:15 crc kubenswrapper[4869]: I0314 09:20:15.986258 4869 scope.go:117] "RemoveContainer" containerID="a94df3683f4de701c59fa43e469daf0695f9b06083105d6ba6172c5e734f3124" Mar 14 09:20:15 crc kubenswrapper[4869]: E0314 09:20:15.986594 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0e0825fa-2453-46a0-b677-79808694bba8)\"" pod="openstack/watcher-decision-engine-0" podUID="0e0825fa-2453-46a0-b677-79808694bba8" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.010736 4869 scope.go:117] "RemoveContainer" containerID="935ea60844a206e8152a8de0ba07bde6ce51760932da8ec64e419d06a379cc2b" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.097067 4869 scope.go:117] "RemoveContainer" containerID="d9439a2f209adf9d7b1d2bb9a0a3cff8f81229588faa1acc51d30baebcee1776" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.106447 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.132675 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.143229 4869 scope.go:117] "RemoveContainer" 
containerID="8d57ba50d496baf7056dfcaa55cade1e2b3b21f4bf62759e539fb36e9105bb85" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.187044 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.202195 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.202311 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.205461 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.205690 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.212906 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-659c5f77bf-p8tvx"] Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.310736 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.311113 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-log-httpd\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.311163 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-run-httpd\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.311215 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjhck\" (UniqueName: \"kubernetes.io/projected/237e08cf-4de9-4873-925d-8502d2e2abe5-kube-api-access-pjhck\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.311258 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-scripts\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.311289 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-config-data\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.311312 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.405302 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.412431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-log-httpd\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.412504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-run-httpd\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.412558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjhck\" (UniqueName: \"kubernetes.io/projected/237e08cf-4de9-4873-925d-8502d2e2abe5-kube-api-access-pjhck\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.413296 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-log-httpd\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.414958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-scripts\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.415048 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-config-data\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.415107 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.415213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.416237 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-run-httpd\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.425071 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.434209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 
09:20:16.434915 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-config-data\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.457957 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-scripts\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.460274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjhck\" (UniqueName: \"kubernetes.io/projected/237e08cf-4de9-4873-925d-8502d2e2abe5-kube-api-access-pjhck\") pod \"ceilometer-0\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.517060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed19444d-bcb2-4703-9de9-14828f14fed1-logs\") pod \"ed19444d-bcb2-4703-9de9-14828f14fed1\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.517415 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-scripts\") pod \"ed19444d-bcb2-4703-9de9-14828f14fed1\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.517766 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zfjr\" (UniqueName: \"kubernetes.io/projected/ed19444d-bcb2-4703-9de9-14828f14fed1-kube-api-access-7zfjr\") pod \"ed19444d-bcb2-4703-9de9-14828f14fed1\" (UID: 
\"ed19444d-bcb2-4703-9de9-14828f14fed1\") " Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.517969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-config-data\") pod \"ed19444d-bcb2-4703-9de9-14828f14fed1\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.518105 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed19444d-bcb2-4703-9de9-14828f14fed1-horizon-secret-key\") pod \"ed19444d-bcb2-4703-9de9-14828f14fed1\" (UID: \"ed19444d-bcb2-4703-9de9-14828f14fed1\") " Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.517800 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed19444d-bcb2-4703-9de9-14828f14fed1-logs" (OuterVolumeSpecName: "logs") pod "ed19444d-bcb2-4703-9de9-14828f14fed1" (UID: "ed19444d-bcb2-4703-9de9-14828f14fed1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.519173 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed19444d-bcb2-4703-9de9-14828f14fed1-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.523737 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed19444d-bcb2-4703-9de9-14828f14fed1-kube-api-access-7zfjr" (OuterVolumeSpecName: "kube-api-access-7zfjr") pod "ed19444d-bcb2-4703-9de9-14828f14fed1" (UID: "ed19444d-bcb2-4703-9de9-14828f14fed1"). InnerVolumeSpecName "kube-api-access-7zfjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.525691 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed19444d-bcb2-4703-9de9-14828f14fed1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ed19444d-bcb2-4703-9de9-14828f14fed1" (UID: "ed19444d-bcb2-4703-9de9-14828f14fed1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.540908 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.571344 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-scripts" (OuterVolumeSpecName: "scripts") pod "ed19444d-bcb2-4703-9de9-14828f14fed1" (UID: "ed19444d-bcb2-4703-9de9-14828f14fed1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.586600 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-768f98d44b-nmkh7"] Mar 14 09:20:16 crc kubenswrapper[4869]: W0314 09:20:16.588639 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod769cfb80_dc46_4b86_aabd_c038375e5c3d.slice/crio-1ac3173f31d39cb4060de49d7dfb588ffb467e7e194b7e8371ef75d13e9a6554 WatchSource:0}: Error finding container 1ac3173f31d39cb4060de49d7dfb588ffb467e7e194b7e8371ef75d13e9a6554: Status 404 returned error can't find the container with id 1ac3173f31d39cb4060de49d7dfb588ffb467e7e194b7e8371ef75d13e9a6554 Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.597327 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-config-data" (OuterVolumeSpecName: "config-data") pod "ed19444d-bcb2-4703-9de9-14828f14fed1" (UID: "ed19444d-bcb2-4703-9de9-14828f14fed1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.608187 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74d57497c5-s4cfd"] Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.621670 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zfjr\" (UniqueName: \"kubernetes.io/projected/ed19444d-bcb2-4703-9de9-14828f14fed1-kube-api-access-7zfjr\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.621723 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.621739 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed19444d-bcb2-4703-9de9-14828f14fed1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.621753 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed19444d-bcb2-4703-9de9-14828f14fed1-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:16 crc kubenswrapper[4869]: I0314 09:20:16.725970 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85ffd7569d-mt675"] Mar 14 09:20:16 crc kubenswrapper[4869]: W0314 09:20:16.731464 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33df5438_d9cd_4818_be10_a5d630a27193.slice/crio-e60e6a883e1234858f5998bd8ba97e77d4b269a2fb8d4e2b09842e9006c84713 WatchSource:0}: Error finding container e60e6a883e1234858f5998bd8ba97e77d4b269a2fb8d4e2b09842e9006c84713: Status 404 returned error can't find the container with id e60e6a883e1234858f5998bd8ba97e77d4b269a2fb8d4e2b09842e9006c84713 
Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.007242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85ffd7569d-mt675" event={"ID":"33df5438-d9cd-4818-be10-a5d630a27193","Type":"ContainerStarted","Data":"e60e6a883e1234858f5998bd8ba97e77d4b269a2fb8d4e2b09842e9006c84713"} Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.009108 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-659c5f77bf-p8tvx" event={"ID":"c8af003e-d2bd-4748-b27c-5cdcb2e7914f","Type":"ContainerStarted","Data":"dba48cd376fbc8419312287ce84905b59eaf87d7f0b191b03056ff4764b25d27"} Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.012025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7558987fbf-ps5jx" event={"ID":"ed19444d-bcb2-4703-9de9-14828f14fed1","Type":"ContainerDied","Data":"b2d931c289ef917fa1be54a23cfa5c49159e76af5d2ee4afd89037370b19da8b"} Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.012055 4869 scope.go:117] "RemoveContainer" containerID="050e858f30980bf63823f13a4d44bbaa98479252cd1164bba19f482f360487aa" Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.012144 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7558987fbf-ps5jx" Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.013743 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" event={"ID":"6d71cfc4-b9dc-4fe1-be63-7da133a49f08","Type":"ContainerStarted","Data":"beb26a9dcf92045e2bcca672b1d0047b6abd339f0017131563f722c22068c223"} Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.017909 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" event={"ID":"769cfb80-dc46-4b86-aabd-c038375e5c3d","Type":"ContainerStarted","Data":"1ac3173f31d39cb4060de49d7dfb588ffb467e7e194b7e8371ef75d13e9a6554"} Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.056593 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7558987fbf-ps5jx"] Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.063462 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7558987fbf-ps5jx"] Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.156091 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.200670 4869 scope.go:117] "RemoveContainer" containerID="76b80c712aec5b31ce3165b6defd2939fa432bc2b9ea72c8f6c8a57fff2da6ff" Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.713581 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f01a69-1d04-4788-b13d-f944b5f37b06" path="/var/lib/kubelet/pods/60f01a69-1d04-4788-b13d-f944b5f37b06/volumes" Mar 14 09:20:17 crc kubenswrapper[4869]: I0314 09:20:17.714672 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed19444d-bcb2-4703-9de9-14828f14fed1" path="/var/lib/kubelet/pods/ed19444d-bcb2-4703-9de9-14828f14fed1/volumes" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.028076 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerID="0a0656a73f164123e855b5c5c183c393d6c6cc2bea2ca5dcb0de4b3987676f33" exitCode=137 Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.028145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5858c9f6c-clfct" event={"ID":"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62","Type":"ContainerDied","Data":"0a0656a73f164123e855b5c5c183c393d6c6cc2bea2ca5dcb0de4b3987676f33"} Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.030958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerStarted","Data":"b6f3e4648f88aab3bc9168d26d16b82a55914ac4120fde7a6d48237a21f9352a"} Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.488682 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66db8b8f5d-6bxhh"] Mar 14 09:20:18 crc kubenswrapper[4869]: E0314 09:20:18.489103 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerName="horizon" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.489121 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerName="horizon" Mar 14 09:20:18 crc kubenswrapper[4869]: E0314 09:20:18.489133 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerName="horizon-log" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.489139 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerName="horizon-log" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.489335 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerName="horizon-log" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.489370 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ed19444d-bcb2-4703-9de9-14828f14fed1" containerName="horizon" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.490439 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.494278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.503671 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.541282 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66db8b8f5d-6bxhh"] Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.564793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-config-data-custom\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.565070 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-config-data\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.565287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zlx\" (UniqueName: \"kubernetes.io/projected/982517b3-3240-45ca-9dcd-79f7a7a648a1-kube-api-access-z6zlx\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" 
Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.565433 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/982517b3-3240-45ca-9dcd-79f7a7a648a1-logs\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.565583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-public-tls-certs\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.565703 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-combined-ca-bundle\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.565819 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-internal-tls-certs\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.667811 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/982517b3-3240-45ca-9dcd-79f7a7a648a1-logs\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " 
pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.667887 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-public-tls-certs\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.667918 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-combined-ca-bundle\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.667939 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-internal-tls-certs\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.667995 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-config-data-custom\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.668017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-config-data\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " 
pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.668091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6zlx\" (UniqueName: \"kubernetes.io/projected/982517b3-3240-45ca-9dcd-79f7a7a648a1-kube-api-access-z6zlx\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.669025 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/982517b3-3240-45ca-9dcd-79f7a7a648a1-logs\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.673043 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-public-tls-certs\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.673869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-internal-tls-certs\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.674371 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-combined-ca-bundle\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc 
kubenswrapper[4869]: I0314 09:20:18.675098 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-config-data-custom\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.687178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6zlx\" (UniqueName: \"kubernetes.io/projected/982517b3-3240-45ca-9dcd-79f7a7a648a1-kube-api-access-z6zlx\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.687375 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982517b3-3240-45ca-9dcd-79f7a7a648a1-config-data\") pod \"barbican-api-66db8b8f5d-6bxhh\" (UID: \"982517b3-3240-45ca-9dcd-79f7a7a648a1\") " pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:18 crc kubenswrapper[4869]: I0314 09:20:18.823158 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:19 crc kubenswrapper[4869]: I0314 09:20:19.283082 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66db8b8f5d-6bxhh"] Mar 14 09:20:20 crc kubenswrapper[4869]: I0314 09:20:20.050090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66db8b8f5d-6bxhh" event={"ID":"982517b3-3240-45ca-9dcd-79f7a7a648a1","Type":"ContainerStarted","Data":"54c5e59b7e592f63a28e3f6ef585dd3fe7d4aad42f12298f9eaad110666603c2"} Mar 14 09:20:20 crc kubenswrapper[4869]: I0314 09:20:20.238094 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:20 crc kubenswrapper[4869]: I0314 09:20:20.238477 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:20 crc kubenswrapper[4869]: I0314 09:20:20.239427 4869 scope.go:117] "RemoveContainer" containerID="a94df3683f4de701c59fa43e469daf0695f9b06083105d6ba6172c5e734f3124" Mar 14 09:20:20 crc kubenswrapper[4869]: E0314 09:20:20.239816 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0e0825fa-2453-46a0-b677-79808694bba8)\"" pod="openstack/watcher-decision-engine-0" podUID="0e0825fa-2453-46a0-b677-79808694bba8" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.174261 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.320325 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-logs\") pod \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.320845 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-scripts\") pod \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.320940 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwd49\" (UniqueName: \"kubernetes.io/projected/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-kube-api-access-pwd49\") pod \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.321008 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-horizon-secret-key\") pod \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.321221 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-config-data\") pod \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\" (UID: \"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62\") " Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.321243 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-logs" (OuterVolumeSpecName: "logs") pod "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" (UID: "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.321705 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.328692 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-kube-api-access-pwd49" (OuterVolumeSpecName: "kube-api-access-pwd49") pod "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" (UID: "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62"). InnerVolumeSpecName "kube-api-access-pwd49". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.328773 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" (UID: "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.352437 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-scripts" (OuterVolumeSpecName: "scripts") pod "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" (UID: "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.352465 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-config-data" (OuterVolumeSpecName: "config-data") pod "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" (UID: "a87045a9-e2a7-4c0e-b98e-7684cdfb6a62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.439409 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.439445 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.439455 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwd49\" (UniqueName: \"kubernetes.io/projected/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-kube-api-access-pwd49\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:21 crc kubenswrapper[4869]: I0314 09:20:21.439463 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:22 crc kubenswrapper[4869]: I0314 09:20:22.077856 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66db8b8f5d-6bxhh" event={"ID":"982517b3-3240-45ca-9dcd-79f7a7a648a1","Type":"ContainerStarted","Data":"add40c9d1c9ce46eaf4d95faadf0ff21a36eb2c2e242950c00a6d1e58d42ae60"} Mar 14 09:20:22 crc kubenswrapper[4869]: I0314 09:20:22.081319 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-5858c9f6c-clfct" event={"ID":"a87045a9-e2a7-4c0e-b98e-7684cdfb6a62","Type":"ContainerDied","Data":"f6121c58f5e4bc4214a6801e73be931768dc328afb602953968514e5b2fb6cdc"} Mar 14 09:20:22 crc kubenswrapper[4869]: I0314 09:20:22.081358 4869 scope.go:117] "RemoveContainer" containerID="f4e07bdc55d0d34423f7552cd97943966eae7eace67df01331f16ec3c7632b39" Mar 14 09:20:22 crc kubenswrapper[4869]: I0314 09:20:22.081446 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5858c9f6c-clfct" Mar 14 09:20:22 crc kubenswrapper[4869]: I0314 09:20:22.172861 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5858c9f6c-clfct"] Mar 14 09:20:22 crc kubenswrapper[4869]: I0314 09:20:22.191600 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5858c9f6c-clfct"] Mar 14 09:20:23 crc kubenswrapper[4869]: I0314 09:20:23.093796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66db8b8f5d-6bxhh" event={"ID":"982517b3-3240-45ca-9dcd-79f7a7a648a1","Type":"ContainerStarted","Data":"7b65903ff6a4e20c730dc564d71a3a0b3cd69605a9a12c5f8a3031a5f1c7c988"} Mar 14 09:20:23 crc kubenswrapper[4869]: I0314 09:20:23.094272 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:23 crc kubenswrapper[4869]: I0314 09:20:23.096829 4869 generic.go:334] "Generic (PLEG): container finished" podID="769cfb80-dc46-4b86-aabd-c038375e5c3d" containerID="3206a4457d6593321b1e2cbb60ca09e75c07ba2ff201f017aad11c4a3cc0a385" exitCode=0 Mar 14 09:20:23 crc kubenswrapper[4869]: I0314 09:20:23.096908 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" event={"ID":"769cfb80-dc46-4b86-aabd-c038375e5c3d","Type":"ContainerDied","Data":"3206a4457d6593321b1e2cbb60ca09e75c07ba2ff201f017aad11c4a3cc0a385"} Mar 14 09:20:23 crc kubenswrapper[4869]: I0314 09:20:23.100807 
4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85ffd7569d-mt675" event={"ID":"33df5438-d9cd-4818-be10-a5d630a27193","Type":"ContainerStarted","Data":"90cdb5ba6d1159050caa9cbfc9dd0273470352cf4dcb81fa54e1d12d6290b805"} Mar 14 09:20:23 crc kubenswrapper[4869]: I0314 09:20:23.128907 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66db8b8f5d-6bxhh" podStartSLOduration=5.128887195 podStartE2EDuration="5.128887195s" podCreationTimestamp="2026-03-14 09:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:23.1168941 +0000 UTC m=+1376.089176183" watchObservedRunningTime="2026-03-14 09:20:23.128887195 +0000 UTC m=+1376.101169258" Mar 14 09:20:23 crc kubenswrapper[4869]: I0314 09:20:23.304333 4869 scope.go:117] "RemoveContainer" containerID="0a0656a73f164123e855b5c5c183c393d6c6cc2bea2ca5dcb0de4b3987676f33" Mar 14 09:20:23 crc kubenswrapper[4869]: I0314 09:20:23.719428 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" path="/var/lib/kubelet/pods/a87045a9-e2a7-4c0e-b98e-7684cdfb6a62/volumes" Mar 14 09:20:23 crc kubenswrapper[4869]: I0314 09:20:23.824572 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.111655 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-659c5f77bf-p8tvx" event={"ID":"c8af003e-d2bd-4748-b27c-5cdcb2e7914f","Type":"ContainerStarted","Data":"8e064a880db1a9a5a1aed99cb0be6000d724a85ca888e0a8dc19c88c1f49c078"} Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.113743 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" 
event={"ID":"6d71cfc4-b9dc-4fe1-be63-7da133a49f08","Type":"ContainerStarted","Data":"1c77b6acf97dbf74006d6c156f11830136a83bcd6c4b8b745f5177584c9b23a4"} Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.114961 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" event={"ID":"769cfb80-dc46-4b86-aabd-c038375e5c3d","Type":"ContainerStarted","Data":"b9e45b12bc841e965b587e627cb5a1b7a777825c9a1eab0aba8b898165a9225a"} Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.115806 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.117569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerStarted","Data":"0593c0e4477c49653279421c2cb2e157da54aafd010b2b01509239c45b039657"} Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.120856 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85ffd7569d-mt675" event={"ID":"33df5438-d9cd-4818-be10-a5d630a27193","Type":"ContainerStarted","Data":"fbaa7dddb2c56dcf7d704ab992ebbfd7f4bf4ee9fe27d273778a029e91d41b84"} Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.120902 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.121073 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.134391 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" podStartSLOduration=9.134374637 podStartE2EDuration="9.134374637s" podCreationTimestamp="2026-03-14 09:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:24.132365486 +0000 UTC m=+1377.104647549" watchObservedRunningTime="2026-03-14 09:20:24.134374637 +0000 UTC m=+1377.106656690" Mar 14 09:20:24 crc kubenswrapper[4869]: I0314 09:20:24.158864 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85ffd7569d-mt675" podStartSLOduration=9.1588435 podStartE2EDuration="9.1588435s" podCreationTimestamp="2026-03-14 09:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:24.157084607 +0000 UTC m=+1377.129366680" watchObservedRunningTime="2026-03-14 09:20:24.1588435 +0000 UTC m=+1377.131125553" Mar 14 09:20:25 crc kubenswrapper[4869]: I0314 09:20:25.131991 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerStarted","Data":"e5730cfe5eb7ea5c7d4db6f8025ceaf8e5cf6488d4738dc4439d1b102009eecf"} Mar 14 09:20:25 crc kubenswrapper[4869]: I0314 09:20:25.134827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-659c5f77bf-p8tvx" event={"ID":"c8af003e-d2bd-4748-b27c-5cdcb2e7914f","Type":"ContainerStarted","Data":"cc888d082084a79f53187543fe772c262444b1b4b01150304774cb643410a9ab"} Mar 14 09:20:25 crc kubenswrapper[4869]: I0314 09:20:25.137312 4869 generic.go:334] "Generic (PLEG): container finished" podID="5806f1f4-83ae-4f76-ba42-f4943cbef129" containerID="16f66cc143b425762b7476c0fbcc17d5bb966de3b31c9ed3a53bff59927136da" exitCode=0 Mar 14 09:20:25 crc kubenswrapper[4869]: I0314 09:20:25.137473 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jmtdx" event={"ID":"5806f1f4-83ae-4f76-ba42-f4943cbef129","Type":"ContainerDied","Data":"16f66cc143b425762b7476c0fbcc17d5bb966de3b31c9ed3a53bff59927136da"} Mar 14 09:20:25 crc kubenswrapper[4869]: I0314 09:20:25.142715 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" event={"ID":"6d71cfc4-b9dc-4fe1-be63-7da133a49f08","Type":"ContainerStarted","Data":"d51b6e35e08787a493acca7e0b39f23c2c8deb421f06f0b148fdb7a5a4540302"} Mar 14 09:20:25 crc kubenswrapper[4869]: I0314 09:20:25.171145 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-659c5f77bf-p8tvx" podStartSLOduration=3.022802361 podStartE2EDuration="10.171124168s" podCreationTimestamp="2026-03-14 09:20:15 +0000 UTC" firstStartedPulling="2026-03-14 09:20:16.234450673 +0000 UTC m=+1369.206732726" lastFinishedPulling="2026-03-14 09:20:23.38277247 +0000 UTC m=+1376.355054533" observedRunningTime="2026-03-14 09:20:25.153669537 +0000 UTC m=+1378.125951620" watchObservedRunningTime="2026-03-14 09:20:25.171124168 +0000 UTC m=+1378.143406221" Mar 14 09:20:25 crc kubenswrapper[4869]: I0314 09:20:25.186279 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-768f98d44b-nmkh7" podStartSLOduration=3.341068995 podStartE2EDuration="10.186256992s" podCreationTimestamp="2026-03-14 09:20:15 +0000 UTC" firstStartedPulling="2026-03-14 09:20:16.592876917 +0000 UTC m=+1369.565158970" lastFinishedPulling="2026-03-14 09:20:23.438064904 +0000 UTC m=+1376.410346967" observedRunningTime="2026-03-14 09:20:25.174977024 +0000 UTC m=+1378.147259077" watchObservedRunningTime="2026-03-14 09:20:25.186256992 +0000 UTC m=+1378.158539045" Mar 14 09:20:25 crc kubenswrapper[4869]: I0314 09:20:25.715697 4869 scope.go:117] "RemoveContainer" containerID="3683451d7c0cf12aa3458c040cb03518e7577ef80ba5cfb696720772196bef1e" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.158568 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerStarted","Data":"265f4c705ccdc82ce7cc6d355c6fccdd972d2c0deba1c2173a04c40ad5f72ef2"} Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.555731 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.645483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-combined-ca-bundle\") pod \"5806f1f4-83ae-4f76-ba42-f4943cbef129\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.645604 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-config-data\") pod \"5806f1f4-83ae-4f76-ba42-f4943cbef129\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.645688 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwxwg\" (UniqueName: \"kubernetes.io/projected/5806f1f4-83ae-4f76-ba42-f4943cbef129-kube-api-access-jwxwg\") pod \"5806f1f4-83ae-4f76-ba42-f4943cbef129\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.645791 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5806f1f4-83ae-4f76-ba42-f4943cbef129-etc-machine-id\") pod \"5806f1f4-83ae-4f76-ba42-f4943cbef129\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.645873 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-scripts\") pod \"5806f1f4-83ae-4f76-ba42-f4943cbef129\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.646443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5806f1f4-83ae-4f76-ba42-f4943cbef129-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5806f1f4-83ae-4f76-ba42-f4943cbef129" (UID: "5806f1f4-83ae-4f76-ba42-f4943cbef129"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.647334 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-db-sync-config-data\") pod \"5806f1f4-83ae-4f76-ba42-f4943cbef129\" (UID: \"5806f1f4-83ae-4f76-ba42-f4943cbef129\") " Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.647972 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5806f1f4-83ae-4f76-ba42-f4943cbef129-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.653816 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-scripts" (OuterVolumeSpecName: "scripts") pod "5806f1f4-83ae-4f76-ba42-f4943cbef129" (UID: "5806f1f4-83ae-4f76-ba42-f4943cbef129"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.653818 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5806f1f4-83ae-4f76-ba42-f4943cbef129" (UID: "5806f1f4-83ae-4f76-ba42-f4943cbef129"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.655012 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5806f1f4-83ae-4f76-ba42-f4943cbef129-kube-api-access-jwxwg" (OuterVolumeSpecName: "kube-api-access-jwxwg") pod "5806f1f4-83ae-4f76-ba42-f4943cbef129" (UID: "5806f1f4-83ae-4f76-ba42-f4943cbef129"). InnerVolumeSpecName "kube-api-access-jwxwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.687907 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5806f1f4-83ae-4f76-ba42-f4943cbef129" (UID: "5806f1f4-83ae-4f76-ba42-f4943cbef129"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.712652 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-config-data" (OuterVolumeSpecName: "config-data") pod "5806f1f4-83ae-4f76-ba42-f4943cbef129" (UID: "5806f1f4-83ae-4f76-ba42-f4943cbef129"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.749892 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwxwg\" (UniqueName: \"kubernetes.io/projected/5806f1f4-83ae-4f76-ba42-f4943cbef129-kube-api-access-jwxwg\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.749938 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.749956 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.749968 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:26 crc kubenswrapper[4869]: I0314 09:20:26.749979 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5806f1f4-83ae-4f76-ba42-f4943cbef129-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.179783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jmtdx" event={"ID":"5806f1f4-83ae-4f76-ba42-f4943cbef129","Type":"ContainerDied","Data":"d513a6790f1602a25f96ef0385b25181addc525a583eaba2b2ecb5fae3a61722"} Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.179928 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-jmtdx" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.181394 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d513a6790f1602a25f96ef0385b25181addc525a583eaba2b2ecb5fae3a61722" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.185178 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"9d73f7712585932f964f047e324821c397ba92b8108032a413dc8cbc1ba74f9f"} Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.553557 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:27 crc kubenswrapper[4869]: E0314 09:20:27.554230 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5806f1f4-83ae-4f76-ba42-f4943cbef129" containerName="cinder-db-sync" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.554249 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5806f1f4-83ae-4f76-ba42-f4943cbef129" containerName="cinder-db-sync" Mar 14 09:20:27 crc kubenswrapper[4869]: E0314 09:20:27.554265 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerName="horizon-log" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.554272 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerName="horizon-log" Mar 14 09:20:27 crc kubenswrapper[4869]: E0314 09:20:27.554309 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerName="horizon" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.554315 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerName="horizon" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.554494 4869 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerName="horizon-log" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.554527 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87045a9-e2a7-4c0e-b98e-7684cdfb6a62" containerName="horizon" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.554535 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5806f1f4-83ae-4f76-ba42-f4943cbef129" containerName="cinder-db-sync" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.555487 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.563277 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.563468 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.563582 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.563820 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5rxgp" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.582615 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.602619 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74d57497c5-s4cfd"] Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.602948 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" podUID="769cfb80-dc46-4b86-aabd-c038375e5c3d" containerName="dnsmasq-dns" 
containerID="cri-o://b9e45b12bc841e965b587e627cb5a1b7a777825c9a1eab0aba8b898165a9225a" gracePeriod=10 Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.646599 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bc87bfbff-xwzkt"] Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.670711 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.724054 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.724128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.724228 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f39f4b10-107f-4919-bcf6-820efd1b82ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.724281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-svc\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 
09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.724371 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.724402 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.724454 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-config\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.724503 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rd7d\" (UniqueName: \"kubernetes.io/projected/f39f4b10-107f-4919-bcf6-820efd1b82ff-kube-api-access-5rd7d\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.848840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.849038 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.849134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-config\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.849361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rd7d\" (UniqueName: \"kubernetes.io/projected/f39f4b10-107f-4919-bcf6-820efd1b82ff-kube-api-access-5rd7d\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.849438 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-swift-storage-0\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.849844 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.849879 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.849923 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrj6\" (UniqueName: \"kubernetes.io/projected/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-kube-api-access-lhrj6\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.849949 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.849978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.850058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f39f4b10-107f-4919-bcf6-820efd1b82ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.850468 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-svc\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.862449 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f39f4b10-107f-4919-bcf6-820efd1b82ff-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.933301 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.933608 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.933755 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.934696 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-svc\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.935599 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-config\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.937158 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bc87bfbff-xwzkt"] Mar 14 09:20:27 crc kubenswrapper[4869]: 
I0314 09:20:27.940533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.964542 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-swift-storage-0\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.964893 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.964920 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhrj6\" (UniqueName: \"kubernetes.io/projected/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-kube-api-access-lhrj6\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.964935 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.969368 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-swift-storage-0\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.979838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.980950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.981337 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-scripts\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.988770 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:27 crc kubenswrapper[4869]: I0314 09:20:27.998229 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rd7d\" (UniqueName: 
\"kubernetes.io/projected/f39f4b10-107f-4919-bcf6-820efd1b82ff-kube-api-access-5rd7d\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.002075 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.012208 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhrj6\" (UniqueName: \"kubernetes.io/projected/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-kube-api-access-lhrj6\") pod \"dnsmasq-dns-7bc87bfbff-xwzkt\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.022613 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.025258 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.038330 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.053949 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.066613 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37eaea15-d213-4622-a690-e913eb85bb45-logs\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.066671 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gch9\" (UniqueName: \"kubernetes.io/projected/37eaea15-d213-4622-a690-e913eb85bb45-kube-api-access-7gch9\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.066722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37eaea15-d213-4622-a690-e913eb85bb45-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.066744 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.066793 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-scripts\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.066883 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data-custom\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.066927 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.152497 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.168866 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.168937 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37eaea15-d213-4622-a690-e913eb85bb45-logs\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.168962 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gch9\" (UniqueName: \"kubernetes.io/projected/37eaea15-d213-4622-a690-e913eb85bb45-kube-api-access-7gch9\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.169002 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37eaea15-d213-4622-a690-e913eb85bb45-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.169050 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.169099 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-scripts\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.169148 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data-custom\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.172854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37eaea15-d213-4622-a690-e913eb85bb45-etc-machine-id\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.173077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37eaea15-d213-4622-a690-e913eb85bb45-logs\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.173958 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data-custom\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.174752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 
09:20:28.181907 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-scripts\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.182246 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.190396 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gch9\" (UniqueName: \"kubernetes.io/projected/37eaea15-d213-4622-a690-e913eb85bb45-kube-api-access-7gch9\") pod \"cinder-api-0\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.200865 4869 generic.go:334] "Generic (PLEG): container finished" podID="769cfb80-dc46-4b86-aabd-c038375e5c3d" containerID="b9e45b12bc841e965b587e627cb5a1b7a777825c9a1eab0aba8b898165a9225a" exitCode=0 Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.200922 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" event={"ID":"769cfb80-dc46-4b86-aabd-c038375e5c3d","Type":"ContainerDied","Data":"b9e45b12bc841e965b587e627cb5a1b7a777825c9a1eab0aba8b898165a9225a"} Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.211102 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5rxgp" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.218891 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.429412 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.438052 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.480464 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-nb\") pod \"769cfb80-dc46-4b86-aabd-c038375e5c3d\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.480535 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njnrd\" (UniqueName: \"kubernetes.io/projected/769cfb80-dc46-4b86-aabd-c038375e5c3d-kube-api-access-njnrd\") pod \"769cfb80-dc46-4b86-aabd-c038375e5c3d\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.480644 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-sb\") pod \"769cfb80-dc46-4b86-aabd-c038375e5c3d\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.480842 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-swift-storage-0\") pod \"769cfb80-dc46-4b86-aabd-c038375e5c3d\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.480882 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-svc\") pod \"769cfb80-dc46-4b86-aabd-c038375e5c3d\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.480913 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-config\") pod \"769cfb80-dc46-4b86-aabd-c038375e5c3d\" (UID: \"769cfb80-dc46-4b86-aabd-c038375e5c3d\") " Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.495694 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/769cfb80-dc46-4b86-aabd-c038375e5c3d-kube-api-access-njnrd" (OuterVolumeSpecName: "kube-api-access-njnrd") pod "769cfb80-dc46-4b86-aabd-c038375e5c3d" (UID: "769cfb80-dc46-4b86-aabd-c038375e5c3d"). InnerVolumeSpecName "kube-api-access-njnrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.585305 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njnrd\" (UniqueName: \"kubernetes.io/projected/769cfb80-dc46-4b86-aabd-c038375e5c3d-kube-api-access-njnrd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.601428 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "769cfb80-dc46-4b86-aabd-c038375e5c3d" (UID: "769cfb80-dc46-4b86-aabd-c038375e5c3d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.616197 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "769cfb80-dc46-4b86-aabd-c038375e5c3d" (UID: "769cfb80-dc46-4b86-aabd-c038375e5c3d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.619387 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "769cfb80-dc46-4b86-aabd-c038375e5c3d" (UID: "769cfb80-dc46-4b86-aabd-c038375e5c3d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.633052 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "769cfb80-dc46-4b86-aabd-c038375e5c3d" (UID: "769cfb80-dc46-4b86-aabd-c038375e5c3d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.644770 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-config" (OuterVolumeSpecName: "config") pod "769cfb80-dc46-4b86-aabd-c038375e5c3d" (UID: "769cfb80-dc46-4b86-aabd-c038375e5c3d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.687194 4869 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.687230 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.687242 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.687250 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.687259 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/769cfb80-dc46-4b86-aabd-c038375e5c3d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:28 crc kubenswrapper[4869]: I0314 09:20:28.803490 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bc87bfbff-xwzkt"] Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.002708 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.245186 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" 
event={"ID":"769cfb80-dc46-4b86-aabd-c038375e5c3d","Type":"ContainerDied","Data":"1ac3173f31d39cb4060de49d7dfb588ffb467e7e194b7e8371ef75d13e9a6554"} Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.245253 4869 scope.go:117] "RemoveContainer" containerID="b9e45b12bc841e965b587e627cb5a1b7a777825c9a1eab0aba8b898165a9225a" Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.246769 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d57497c5-s4cfd" Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.260285 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.263071 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerStarted","Data":"0f0aecb9e45b2ab6579445ab476a5b527abac9725dfc454b404f5df2c283eb93"} Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.264471 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.269688 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" event={"ID":"ac1d19a6-ed3e-43b4-b629-df50141d4ae8","Type":"ContainerStarted","Data":"7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5"} Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.269742 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" event={"ID":"ac1d19a6-ed3e-43b4-b629-df50141d4ae8","Type":"ContainerStarted","Data":"7346d5dbd0c339026c16a2b2ea43ca522319b515cf8d297dd0ee3529edffcf71"} Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.275732 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"f39f4b10-107f-4919-bcf6-820efd1b82ff","Type":"ContainerStarted","Data":"792415e97d3cc67b630a19cfc6d5346298a605ea73efbafee46109bb5d2b48d3"} Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.309610 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.434292625 podStartE2EDuration="13.309575156s" podCreationTimestamp="2026-03-14 09:20:16 +0000 UTC" firstStartedPulling="2026-03-14 09:20:17.254869602 +0000 UTC m=+1370.227151655" lastFinishedPulling="2026-03-14 09:20:28.130152133 +0000 UTC m=+1381.102434186" observedRunningTime="2026-03-14 09:20:29.289455669 +0000 UTC m=+1382.261737722" watchObservedRunningTime="2026-03-14 09:20:29.309575156 +0000 UTC m=+1382.281857209" Mar 14 09:20:29 crc kubenswrapper[4869]: W0314 09:20:29.401735 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37eaea15_d213_4622_a690_e913eb85bb45.slice/crio-334e9bbb874e5b84229bb7a285c7c0280a5c83b365b8513da831112d730a371b WatchSource:0}: Error finding container 334e9bbb874e5b84229bb7a285c7c0280a5c83b365b8513da831112d730a371b: Status 404 returned error can't find the container with id 334e9bbb874e5b84229bb7a285c7c0280a5c83b365b8513da831112d730a371b Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.425694 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74d57497c5-s4cfd"] Mar 14 09:20:29 crc kubenswrapper[4869]: E0314 09:20:29.444133 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac1d19a6_ed3e_43b4_b629_df50141d4ae8.slice/crio-7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5.scope\": RecentStats: unable to find data in memory cache]" Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.451177 4869 scope.go:117] "RemoveContainer" 
containerID="3206a4457d6593321b1e2cbb60ca09e75c07ba2ff201f017aad11c4a3cc0a385" Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.458463 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74d57497c5-s4cfd"] Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.704279 4869 scope.go:117] "RemoveContainer" containerID="6a05c7fd61f3133eb11294b7d7a7eb6fdeb39b8f8720b8ed360334d6d99854a3" Mar 14 09:20:29 crc kubenswrapper[4869]: I0314 09:20:29.743631 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="769cfb80-dc46-4b86-aabd-c038375e5c3d" path="/var/lib/kubelet/pods/769cfb80-dc46-4b86-aabd-c038375e5c3d/volumes" Mar 14 09:20:30 crc kubenswrapper[4869]: I0314 09:20:30.237466 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:30 crc kubenswrapper[4869]: I0314 09:20:30.238568 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:30 crc kubenswrapper[4869]: I0314 09:20:30.238656 4869 scope.go:117] "RemoveContainer" containerID="a94df3683f4de701c59fa43e469daf0695f9b06083105d6ba6172c5e734f3124" Mar 14 09:20:30 crc kubenswrapper[4869]: I0314 09:20:30.428806 4869 generic.go:334] "Generic (PLEG): container finished" podID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" containerID="7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5" exitCode=0 Mar 14 09:20:30 crc kubenswrapper[4869]: I0314 09:20:30.428877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" event={"ID":"ac1d19a6-ed3e-43b4-b629-df50141d4ae8","Type":"ContainerDied","Data":"7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5"} Mar 14 09:20:30 crc kubenswrapper[4869]: I0314 09:20:30.470059 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:30 crc kubenswrapper[4869]: I0314 09:20:30.533749 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"aad3ebc17078f03f839b6596f0e0c9602b1b0d55731a40e54c6502807b95455b"} Mar 14 09:20:30 crc kubenswrapper[4869]: I0314 09:20:30.560092 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37eaea15-d213-4622-a690-e913eb85bb45","Type":"ContainerStarted","Data":"334e9bbb874e5b84229bb7a285c7c0280a5c83b365b8513da831112d730a371b"} Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.058405 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.330493 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66db8b8f5d-6bxhh" Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.418586 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85ffd7569d-mt675"] Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.419222 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85ffd7569d-mt675" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api-log" containerID="cri-o://90cdb5ba6d1159050caa9cbfc9dd0273470352cf4dcb81fa54e1d12d6290b805" gracePeriod=30 Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.419357 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85ffd7569d-mt675" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api" containerID="cri-o://fbaa7dddb2c56dcf7d704ab992ebbfd7f4bf4ee9fe27d273778a029e91d41b84" gracePeriod=30 Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.426728 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85ffd7569d-mt675" 
podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": EOF" Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.427216 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-85ffd7569d-mt675" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": EOF" Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.427315 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-85ffd7569d-mt675" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": EOF" Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.427525 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85ffd7569d-mt675" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": EOF" Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.580841 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" event={"ID":"ac1d19a6-ed3e-43b4-b629-df50141d4ae8","Type":"ContainerStarted","Data":"afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f"} Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.588738 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f39f4b10-107f-4919-bcf6-820efd1b82ff","Type":"ContainerStarted","Data":"1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b"} Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.608742 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerStarted","Data":"264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f"} Mar 14 09:20:31 crc kubenswrapper[4869]: I0314 09:20:31.625675 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37eaea15-d213-4622-a690-e913eb85bb45","Type":"ContainerStarted","Data":"b289444978d815b26a0aafdbf61805f45f6a4b3e973d4f65210f57a07f696363"} Mar 14 09:20:32 crc kubenswrapper[4869]: I0314 09:20:32.647694 4869 generic.go:334] "Generic (PLEG): container finished" podID="33df5438-d9cd-4818-be10-a5d630a27193" containerID="90cdb5ba6d1159050caa9cbfc9dd0273470352cf4dcb81fa54e1d12d6290b805" exitCode=143 Mar 14 09:20:32 crc kubenswrapper[4869]: I0314 09:20:32.649209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85ffd7569d-mt675" event={"ID":"33df5438-d9cd-4818-be10-a5d630a27193","Type":"ContainerDied","Data":"90cdb5ba6d1159050caa9cbfc9dd0273470352cf4dcb81fa54e1d12d6290b805"} Mar 14 09:20:32 crc kubenswrapper[4869]: I0314 09:20:32.649665 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:33 crc kubenswrapper[4869]: I0314 09:20:33.690256 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f39f4b10-107f-4919-bcf6-820efd1b82ff","Type":"ContainerStarted","Data":"7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead"} Mar 14 09:20:33 crc kubenswrapper[4869]: I0314 09:20:33.700462 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37eaea15-d213-4622-a690-e913eb85bb45","Type":"ContainerStarted","Data":"b8502299f036ba83f41f14e62baed0e302d14c935f2ef26aef6b4a2fc4b3ffed"} Mar 14 09:20:33 crc kubenswrapper[4869]: I0314 09:20:33.700672 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="37eaea15-d213-4622-a690-e913eb85bb45" containerName="cinder-api-log" containerID="cri-o://b289444978d815b26a0aafdbf61805f45f6a4b3e973d4f65210f57a07f696363" gracePeriod=30 Mar 14 09:20:33 crc kubenswrapper[4869]: I0314 09:20:33.700687 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="37eaea15-d213-4622-a690-e913eb85bb45" containerName="cinder-api" containerID="cri-o://b8502299f036ba83f41f14e62baed0e302d14c935f2ef26aef6b4a2fc4b3ffed" gracePeriod=30 Mar 14 09:20:33 crc kubenswrapper[4869]: I0314 09:20:33.727253 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" podStartSLOduration=6.727226263 podStartE2EDuration="6.727226263s" podCreationTimestamp="2026-03-14 09:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:32.674819854 +0000 UTC m=+1385.647101907" watchObservedRunningTime="2026-03-14 09:20:33.727226263 +0000 UTC m=+1386.699508326" Mar 14 09:20:33 crc kubenswrapper[4869]: I0314 09:20:33.737076 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.155541168 podStartE2EDuration="6.737052006s" podCreationTimestamp="2026-03-14 09:20:27 +0000 UTC" firstStartedPulling="2026-03-14 09:20:29.040664011 +0000 UTC m=+1382.012946064" lastFinishedPulling="2026-03-14 09:20:29.622174849 +0000 UTC m=+1382.594456902" observedRunningTime="2026-03-14 09:20:33.729460688 +0000 UTC m=+1386.701742761" watchObservedRunningTime="2026-03-14 09:20:33.737052006 +0000 UTC m=+1386.709334079" Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.047648 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.060072 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/placement-6966c9cd66-p4jg9" Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.074933 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.074913402 podStartE2EDuration="7.074913402s" podCreationTimestamp="2026-03-14 09:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:33.752586169 +0000 UTC m=+1386.724868252" watchObservedRunningTime="2026-03-14 09:20:34.074913402 +0000 UTC m=+1387.047195455" Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.411978 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.412579 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.539273 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.549623 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.731041 4869 generic.go:334] "Generic (PLEG): container finished" podID="37eaea15-d213-4622-a690-e913eb85bb45" containerID="b8502299f036ba83f41f14e62baed0e302d14c935f2ef26aef6b4a2fc4b3ffed" exitCode=0 Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.731300 4869 generic.go:334] "Generic (PLEG): container finished" podID="37eaea15-d213-4622-a690-e913eb85bb45" containerID="b289444978d815b26a0aafdbf61805f45f6a4b3e973d4f65210f57a07f696363" exitCode=143 Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.732149 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"37eaea15-d213-4622-a690-e913eb85bb45","Type":"ContainerDied","Data":"b8502299f036ba83f41f14e62baed0e302d14c935f2ef26aef6b4a2fc4b3ffed"} Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.732173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37eaea15-d213-4622-a690-e913eb85bb45","Type":"ContainerDied","Data":"b289444978d815b26a0aafdbf61805f45f6a4b3e973d4f65210f57a07f696363"} Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.851997 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:34 crc kubenswrapper[4869]: I0314 09:20:34.948951 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-857df8f9c4-4hrpr" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.027861 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data\") pod \"37eaea15-d213-4622-a690-e913eb85bb45\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.027921 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-scripts\") pod \"37eaea15-d213-4622-a690-e913eb85bb45\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.027967 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data-custom\") pod \"37eaea15-d213-4622-a690-e913eb85bb45\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.027988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7gch9\" (UniqueName: \"kubernetes.io/projected/37eaea15-d213-4622-a690-e913eb85bb45-kube-api-access-7gch9\") pod \"37eaea15-d213-4622-a690-e913eb85bb45\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.028109 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-combined-ca-bundle\") pod \"37eaea15-d213-4622-a690-e913eb85bb45\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.028127 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37eaea15-d213-4622-a690-e913eb85bb45-logs\") pod \"37eaea15-d213-4622-a690-e913eb85bb45\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.028210 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37eaea15-d213-4622-a690-e913eb85bb45-etc-machine-id\") pod \"37eaea15-d213-4622-a690-e913eb85bb45\" (UID: \"37eaea15-d213-4622-a690-e913eb85bb45\") " Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.028589 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37eaea15-d213-4622-a690-e913eb85bb45-logs" (OuterVolumeSpecName: "logs") pod "37eaea15-d213-4622-a690-e913eb85bb45" (UID: "37eaea15-d213-4622-a690-e913eb85bb45"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.028687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37eaea15-d213-4622-a690-e913eb85bb45-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "37eaea15-d213-4622-a690-e913eb85bb45" (UID: "37eaea15-d213-4622-a690-e913eb85bb45"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.037273 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37eaea15-d213-4622-a690-e913eb85bb45-kube-api-access-7gch9" (OuterVolumeSpecName: "kube-api-access-7gch9") pod "37eaea15-d213-4622-a690-e913eb85bb45" (UID: "37eaea15-d213-4622-a690-e913eb85bb45"). InnerVolumeSpecName "kube-api-access-7gch9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.041225 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-scripts" (OuterVolumeSpecName: "scripts") pod "37eaea15-d213-4622-a690-e913eb85bb45" (UID: "37eaea15-d213-4622-a690-e913eb85bb45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.053656 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37eaea15-d213-4622-a690-e913eb85bb45" (UID: "37eaea15-d213-4622-a690-e913eb85bb45"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.116612 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37eaea15-d213-4622-a690-e913eb85bb45" (UID: "37eaea15-d213-4622-a690-e913eb85bb45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.144298 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.146347 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37eaea15-d213-4622-a690-e913eb85bb45-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.146736 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37eaea15-d213-4622-a690-e913eb85bb45-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.146859 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.147349 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.147638 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gch9\" (UniqueName: 
\"kubernetes.io/projected/37eaea15-d213-4622-a690-e913eb85bb45-kube-api-access-7gch9\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.183744 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data" (OuterVolumeSpecName: "config-data") pod "37eaea15-d213-4622-a690-e913eb85bb45" (UID: "37eaea15-d213-4622-a690-e913eb85bb45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.252745 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37eaea15-d213-4622-a690-e913eb85bb45-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.741845 4869 generic.go:334] "Generic (PLEG): container finished" podID="8eda9c72-2272-45c8-b843-1c2b3c27f709" containerID="63afeaed1a472f127b732df459b006e941361f28145823c009d0f9d940099676" exitCode=0 Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.741938 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6sb7f" event={"ID":"8eda9c72-2272-45c8-b843-1c2b3c27f709","Type":"ContainerDied","Data":"63afeaed1a472f127b732df459b006e941361f28145823c009d0f9d940099676"} Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.744332 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"37eaea15-d213-4622-a690-e913eb85bb45","Type":"ContainerDied","Data":"334e9bbb874e5b84229bb7a285c7c0280a5c83b365b8513da831112d730a371b"} Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.744384 4869 scope.go:117] "RemoveContainer" containerID="b8502299f036ba83f41f14e62baed0e302d14c935f2ef26aef6b4a2fc4b3ffed" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.744416 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.770677 4869 scope.go:117] "RemoveContainer" containerID="b289444978d815b26a0aafdbf61805f45f6a4b3e973d4f65210f57a07f696363" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.791522 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.809908 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.822596 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:35 crc kubenswrapper[4869]: E0314 09:20:35.823296 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37eaea15-d213-4622-a690-e913eb85bb45" containerName="cinder-api-log" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.823328 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="37eaea15-d213-4622-a690-e913eb85bb45" containerName="cinder-api-log" Mar 14 09:20:35 crc kubenswrapper[4869]: E0314 09:20:35.823348 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37eaea15-d213-4622-a690-e913eb85bb45" containerName="cinder-api" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.823355 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="37eaea15-d213-4622-a690-e913eb85bb45" containerName="cinder-api" Mar 14 09:20:35 crc kubenswrapper[4869]: E0314 09:20:35.823385 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769cfb80-dc46-4b86-aabd-c038375e5c3d" containerName="init" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.823392 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="769cfb80-dc46-4b86-aabd-c038375e5c3d" containerName="init" Mar 14 09:20:35 crc kubenswrapper[4869]: E0314 09:20:35.823404 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="769cfb80-dc46-4b86-aabd-c038375e5c3d" containerName="dnsmasq-dns" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.823410 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="769cfb80-dc46-4b86-aabd-c038375e5c3d" containerName="dnsmasq-dns" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.823714 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="37eaea15-d213-4622-a690-e913eb85bb45" containerName="cinder-api-log" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.823737 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="37eaea15-d213-4622-a690-e913eb85bb45" containerName="cinder-api" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.823766 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="769cfb80-dc46-4b86-aabd-c038375e5c3d" containerName="dnsmasq-dns" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.825251 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.829923 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.831414 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.838224 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.838707 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.864221 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.864274 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.864333 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.864359 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-logs\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.864385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.864413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-scripts\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.864459 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj68j\" (UniqueName: \"kubernetes.io/projected/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-kube-api-access-cj68j\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.864504 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.864592 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-config-data\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.967681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-config-data\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.967815 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.967859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.967924 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.967960 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-logs\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.967992 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.968044 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-scripts\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.968111 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj68j\" (UniqueName: \"kubernetes.io/projected/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-kube-api-access-cj68j\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 
09:20:35.968193 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.968878 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.969430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-logs\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.974501 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-config-data\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.975079 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-scripts\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.976234 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " 
pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.977615 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.988333 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.988747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:35 crc kubenswrapper[4869]: I0314 09:20:35.991775 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj68j\" (UniqueName: \"kubernetes.io/projected/e821fb2e-1d49-4ae2-9404-1e6efa9009a5-kube-api-access-cj68j\") pod \"cinder-api-0\" (UID: \"e821fb2e-1d49-4ae2-9404-1e6efa9009a5\") " pod="openstack/cinder-api-0" Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.161045 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.511651 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85ffd7569d-mt675" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.511809 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85ffd7569d-mt675" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.562187 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85ffd7569d-mt675" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": read tcp 10.217.0.2:39550->10.217.0.183:9311: read: connection reset by peer" Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.562187 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85ffd7569d-mt675" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.183:9311/healthcheck\": read tcp 10.217.0.2:39538->10.217.0.183:9311: read: connection reset by peer" Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.703782 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 14 09:20:36 crc kubenswrapper[4869]: W0314 09:20:36.704879 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode821fb2e_1d49_4ae2_9404_1e6efa9009a5.slice/crio-697f8911dc9e6d908db41a473cfa2f911bce0e8b209a7cc2813ca092382d512d WatchSource:0}: Error finding container 697f8911dc9e6d908db41a473cfa2f911bce0e8b209a7cc2813ca092382d512d: Status 404 returned error can't find the container with id 697f8911dc9e6d908db41a473cfa2f911bce0e8b209a7cc2813ca092382d512d Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.768369 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e821fb2e-1d49-4ae2-9404-1e6efa9009a5","Type":"ContainerStarted","Data":"697f8911dc9e6d908db41a473cfa2f911bce0e8b209a7cc2813ca092382d512d"} Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.779925 4869 generic.go:334] "Generic (PLEG): container finished" podID="33df5438-d9cd-4818-be10-a5d630a27193" containerID="fbaa7dddb2c56dcf7d704ab992ebbfd7f4bf4ee9fe27d273778a029e91d41b84" exitCode=0 Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.779978 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85ffd7569d-mt675" event={"ID":"33df5438-d9cd-4818-be10-a5d630a27193","Type":"ContainerDied","Data":"fbaa7dddb2c56dcf7d704ab992ebbfd7f4bf4ee9fe27d273778a029e91d41b84"} Mar 14 09:20:36 crc kubenswrapper[4869]: I0314 09:20:36.969675 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.094922 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-combined-ca-bundle\") pod \"33df5438-d9cd-4818-be10-a5d630a27193\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.095483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33df5438-d9cd-4818-be10-a5d630a27193-logs\") pod \"33df5438-d9cd-4818-be10-a5d630a27193\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.095528 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data\") pod \"33df5438-d9cd-4818-be10-a5d630a27193\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.095552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data-custom\") pod \"33df5438-d9cd-4818-be10-a5d630a27193\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.095675 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56nq9\" (UniqueName: \"kubernetes.io/projected/33df5438-d9cd-4818-be10-a5d630a27193-kube-api-access-56nq9\") pod \"33df5438-d9cd-4818-be10-a5d630a27193\" (UID: \"33df5438-d9cd-4818-be10-a5d630a27193\") " Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.104398 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/33df5438-d9cd-4818-be10-a5d630a27193-kube-api-access-56nq9" (OuterVolumeSpecName: "kube-api-access-56nq9") pod "33df5438-d9cd-4818-be10-a5d630a27193" (UID: "33df5438-d9cd-4818-be10-a5d630a27193"). InnerVolumeSpecName "kube-api-access-56nq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.109820 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33df5438-d9cd-4818-be10-a5d630a27193-logs" (OuterVolumeSpecName: "logs") pod "33df5438-d9cd-4818-be10-a5d630a27193" (UID: "33df5438-d9cd-4818-be10-a5d630a27193"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.113662 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33df5438-d9cd-4818-be10-a5d630a27193" (UID: "33df5438-d9cd-4818-be10-a5d630a27193"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.177698 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33df5438-d9cd-4818-be10-a5d630a27193" (UID: "33df5438-d9cd-4818-be10-a5d630a27193"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.219688 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data" (OuterVolumeSpecName: "config-data") pod "33df5438-d9cd-4818-be10-a5d630a27193" (UID: "33df5438-d9cd-4818-be10-a5d630a27193"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.220110 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33df5438-d9cd-4818-be10-a5d630a27193-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.220128 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.220138 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.220155 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56nq9\" (UniqueName: \"kubernetes.io/projected/33df5438-d9cd-4818-be10-a5d630a27193-kube-api-access-56nq9\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.220164 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33df5438-d9cd-4818-be10-a5d630a27193-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.233611 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 14 09:20:37 crc kubenswrapper[4869]: E0314 09:20:37.234201 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api-log" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.234223 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api-log" Mar 14 09:20:37 crc 
kubenswrapper[4869]: E0314 09:20:37.234241 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.234250 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.234494 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api-log" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.234548 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="33df5438-d9cd-4818-be10-a5d630a27193" containerName="barbican-api" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.235498 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.245130 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.245399 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-dvjt8" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.245604 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.286571 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.322294 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/35c6d1fd-be8f-4390-9199-bf573760717b-openstack-config\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " 
pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.322354 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2cr\" (UniqueName: \"kubernetes.io/projected/35c6d1fd-be8f-4390-9199-bf573760717b-kube-api-access-4q2cr\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.322428 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/35c6d1fd-be8f-4390-9199-bf573760717b-openstack-config-secret\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.322495 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6d1fd-be8f-4390-9199-bf573760717b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.325361 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.427230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6f8c\" (UniqueName: \"kubernetes.io/projected/8eda9c72-2272-45c8-b843-1c2b3c27f709-kube-api-access-h6f8c\") pod \"8eda9c72-2272-45c8-b843-1c2b3c27f709\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.427363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-combined-ca-bundle\") pod \"8eda9c72-2272-45c8-b843-1c2b3c27f709\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.427402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-config\") pod \"8eda9c72-2272-45c8-b843-1c2b3c27f709\" (UID: \"8eda9c72-2272-45c8-b843-1c2b3c27f709\") " Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.428043 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/35c6d1fd-be8f-4390-9199-bf573760717b-openstack-config\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.428070 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q2cr\" (UniqueName: \"kubernetes.io/projected/35c6d1fd-be8f-4390-9199-bf573760717b-kube-api-access-4q2cr\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.428120 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/35c6d1fd-be8f-4390-9199-bf573760717b-openstack-config-secret\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.428168 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6d1fd-be8f-4390-9199-bf573760717b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.429358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/35c6d1fd-be8f-4390-9199-bf573760717b-openstack-config\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.432480 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/35c6d1fd-be8f-4390-9199-bf573760717b-openstack-config-secret\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.437897 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eda9c72-2272-45c8-b843-1c2b3c27f709-kube-api-access-h6f8c" (OuterVolumeSpecName: "kube-api-access-h6f8c") pod "8eda9c72-2272-45c8-b843-1c2b3c27f709" (UID: "8eda9c72-2272-45c8-b843-1c2b3c27f709"). InnerVolumeSpecName "kube-api-access-h6f8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.449216 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q2cr\" (UniqueName: \"kubernetes.io/projected/35c6d1fd-be8f-4390-9199-bf573760717b-kube-api-access-4q2cr\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.454306 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6d1fd-be8f-4390-9199-bf573760717b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"35c6d1fd-be8f-4390-9199-bf573760717b\") " pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.465222 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-config" (OuterVolumeSpecName: "config") pod "8eda9c72-2272-45c8-b843-1c2b3c27f709" (UID: "8eda9c72-2272-45c8-b843-1c2b3c27f709"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.468554 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eda9c72-2272-45c8-b843-1c2b3c27f709" (UID: "8eda9c72-2272-45c8-b843-1c2b3c27f709"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.530231 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6f8c\" (UniqueName: \"kubernetes.io/projected/8eda9c72-2272-45c8-b843-1c2b3c27f709-kube-api-access-h6f8c\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.530272 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.530283 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8eda9c72-2272-45c8-b843-1c2b3c27f709-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.663282 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.718455 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37eaea15-d213-4622-a690-e913eb85bb45" path="/var/lib/kubelet/pods/37eaea15-d213-4622-a690-e913eb85bb45/volumes" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.846207 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85ffd7569d-mt675" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.851308 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85ffd7569d-mt675" event={"ID":"33df5438-d9cd-4818-be10-a5d630a27193","Type":"ContainerDied","Data":"e60e6a883e1234858f5998bd8ba97e77d4b269a2fb8d4e2b09842e9006c84713"} Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.851420 4869 scope.go:117] "RemoveContainer" containerID="fbaa7dddb2c56dcf7d704ab992ebbfd7f4bf4ee9fe27d273778a029e91d41b84" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.853636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e821fb2e-1d49-4ae2-9404-1e6efa9009a5","Type":"ContainerStarted","Data":"cab4d55220b9540d7cc4f156326e2aed4937f9bf5f3258965c8aba1174b9a9a4"} Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.862222 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6sb7f" event={"ID":"8eda9c72-2272-45c8-b843-1c2b3c27f709","Type":"ContainerDied","Data":"d4e3f70dbd58daec23a47a4032447475b9b293cfb1d7c5fb8a9413b0bf995d4e"} Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.862983 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4e3f70dbd58daec23a47a4032447475b9b293cfb1d7c5fb8a9413b0bf995d4e" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.862798 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6sb7f" Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.886573 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85ffd7569d-mt675"] Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.897074 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-85ffd7569d-mt675"] Mar 14 09:20:37 crc kubenswrapper[4869]: I0314 09:20:37.900768 4869 scope.go:117] "RemoveContainer" containerID="90cdb5ba6d1159050caa9cbfc9dd0273470352cf4dcb81fa54e1d12d6290b805" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.000888 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bc87bfbff-xwzkt"] Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.001120 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" podUID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" containerName="dnsmasq-dns" containerID="cri-o://afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f" gracePeriod=10 Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.002857 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.039809 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7457bb75c5-8j2q5"] Mar 14 09:20:38 crc kubenswrapper[4869]: E0314 09:20:38.040195 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eda9c72-2272-45c8-b843-1c2b3c27f709" containerName="neutron-db-sync" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.040207 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eda9c72-2272-45c8-b843-1c2b3c27f709" containerName="neutron-db-sync" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.040398 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8eda9c72-2272-45c8-b843-1c2b3c27f709" containerName="neutron-db-sync" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.046203 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.097486 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457bb75c5-8j2q5"] Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.157358 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" podUID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.161668 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-swift-storage-0\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.161719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-nb\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.161739 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qccs\" (UniqueName: \"kubernetes.io/projected/49f3fe18-ffd4-4273-97eb-98e94f198608-kube-api-access-9qccs\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" 
Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.161758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-sb\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.161814 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-svc\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.161905 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-config\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.166148 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-786bc4c684-kzltd"] Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.167743 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.172581 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.173030 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.173332 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.198006 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cr2r7" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.201422 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-786bc4c684-kzltd"] Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.229867 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.265139 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-config\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.265314 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-config\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.266189 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-config\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.274792 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-ovndb-tls-certs\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.274954 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-swift-storage-0\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.274990 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-nb\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.275012 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qccs\" (UniqueName: \"kubernetes.io/projected/49f3fe18-ffd4-4273-97eb-98e94f198608-kube-api-access-9qccs\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.275043 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-sb\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.275079 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j268f\" (UniqueName: \"kubernetes.io/projected/443191c7-2ebd-4ac2-a36e-d6c36958dba6-kube-api-access-j268f\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.275130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-svc\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.275201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-httpd-config\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.275236 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-combined-ca-bundle\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.276085 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-sb\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.276216 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-swift-storage-0\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.276503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-svc\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.276871 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-nb\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.311480 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.333208 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qccs\" (UniqueName: \"kubernetes.io/projected/49f3fe18-ffd4-4273-97eb-98e94f198608-kube-api-access-9qccs\") pod \"dnsmasq-dns-7457bb75c5-8j2q5\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: W0314 09:20:38.367686 4869 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35c6d1fd_be8f_4390_9199_bf573760717b.slice/crio-b2053f1a9fe39e38199205420701ab75517125908e2ea8a93404a6485ccb7fe9 WatchSource:0}: Error finding container b2053f1a9fe39e38199205420701ab75517125908e2ea8a93404a6485ccb7fe9: Status 404 returned error can't find the container with id b2053f1a9fe39e38199205420701ab75517125908e2ea8a93404a6485ccb7fe9 Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.383013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j268f\" (UniqueName: \"kubernetes.io/projected/443191c7-2ebd-4ac2-a36e-d6c36958dba6-kube-api-access-j268f\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.383088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-httpd-config\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.383115 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-combined-ca-bundle\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.383176 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-config\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc 
kubenswrapper[4869]: I0314 09:20:38.383235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-ovndb-tls-certs\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.389399 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-config\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.410448 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-ovndb-tls-certs\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.419255 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-combined-ca-bundle\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.422085 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-httpd-config\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.427715 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j268f\" 
(UniqueName: \"kubernetes.io/projected/443191c7-2ebd-4ac2-a36e-d6c36958dba6-kube-api-access-j268f\") pod \"neutron-786bc4c684-kzltd\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.564727 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.592752 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.617219 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.865972 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.911877 4869 generic.go:334] "Generic (PLEG): container finished" podID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" containerID="afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f" exitCode=0 Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.911947 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" event={"ID":"ac1d19a6-ed3e-43b4-b629-df50141d4ae8","Type":"ContainerDied","Data":"afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f"} Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.911976 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" event={"ID":"ac1d19a6-ed3e-43b4-b629-df50141d4ae8","Type":"ContainerDied","Data":"7346d5dbd0c339026c16a2b2ea43ca522319b515cf8d297dd0ee3529edffcf71"} Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.911994 4869 scope.go:117] "RemoveContainer" 
containerID="afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.912114 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bc87bfbff-xwzkt" Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.928200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"35c6d1fd-be8f-4390-9199-bf573760717b","Type":"ContainerStarted","Data":"b2053f1a9fe39e38199205420701ab75517125908e2ea8a93404a6485ccb7fe9"} Mar 14 09:20:38 crc kubenswrapper[4869]: I0314 09:20:38.980525 4869 scope.go:117] "RemoveContainer" containerID="7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.007253 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-swift-storage-0\") pod \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.007776 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-svc\") pod \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.007937 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-config\") pod \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.007980 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-nb\") pod \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.008074 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhrj6\" (UniqueName: \"kubernetes.io/projected/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-kube-api-access-lhrj6\") pod \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.008119 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-sb\") pod \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\" (UID: \"ac1d19a6-ed3e-43b4-b629-df50141d4ae8\") " Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.029478 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.049838 4869 scope.go:117] "RemoveContainer" containerID="afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f" Mar 14 09:20:39 crc kubenswrapper[4869]: E0314 09:20:39.058475 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f\": container with ID starting with afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f not found: ID does not exist" containerID="afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.058585 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f"} err="failed to get container status 
\"afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f\": rpc error: code = NotFound desc = could not find container \"afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f\": container with ID starting with afcb1766e128bdc3e5d62e20402232bd0698f1a0cf8e46df555ac92f3058c04f not found: ID does not exist" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.058635 4869 scope.go:117] "RemoveContainer" containerID="7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5" Mar 14 09:20:39 crc kubenswrapper[4869]: E0314 09:20:39.060529 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5\": container with ID starting with 7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5 not found: ID does not exist" containerID="7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.060580 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5"} err="failed to get container status \"7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5\": rpc error: code = NotFound desc = could not find container \"7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5\": container with ID starting with 7067b2fd5259fe0651c39c50732f41d06517dba813af81e7714709b41867efa5 not found: ID does not exist" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.073151 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-kube-api-access-lhrj6" (OuterVolumeSpecName: "kube-api-access-lhrj6") pod "ac1d19a6-ed3e-43b4-b629-df50141d4ae8" (UID: "ac1d19a6-ed3e-43b4-b629-df50141d4ae8"). InnerVolumeSpecName "kube-api-access-lhrj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.112357 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhrj6\" (UniqueName: \"kubernetes.io/projected/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-kube-api-access-lhrj6\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.219661 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-config" (OuterVolumeSpecName: "config") pod "ac1d19a6-ed3e-43b4-b629-df50141d4ae8" (UID: "ac1d19a6-ed3e-43b4-b629-df50141d4ae8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.219726 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac1d19a6-ed3e-43b4-b629-df50141d4ae8" (UID: "ac1d19a6-ed3e-43b4-b629-df50141d4ae8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.221982 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac1d19a6-ed3e-43b4-b629-df50141d4ae8" (UID: "ac1d19a6-ed3e-43b4-b629-df50141d4ae8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.235910 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac1d19a6-ed3e-43b4-b629-df50141d4ae8" (UID: "ac1d19a6-ed3e-43b4-b629-df50141d4ae8"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.245781 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac1d19a6-ed3e-43b4-b629-df50141d4ae8" (UID: "ac1d19a6-ed3e-43b4-b629-df50141d4ae8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.318356 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.318386 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.318397 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.318407 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.318416 4869 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac1d19a6-ed3e-43b4-b629-df50141d4ae8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.409497 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-786bc4c684-kzltd"] Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.515161 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457bb75c5-8j2q5"] Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.605700 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.605760 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.670084 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bc87bfbff-xwzkt"] Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.687611 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bc87bfbff-xwzkt"] Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.729947 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33df5438-d9cd-4818-be10-a5d630a27193" path="/var/lib/kubelet/pods/33df5438-d9cd-4818-be10-a5d630a27193/volumes" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.730980 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" path="/var/lib/kubelet/pods/ac1d19a6-ed3e-43b4-b629-df50141d4ae8/volumes" Mar 14 09:20:39 crc kubenswrapper[4869]: E0314 09:20:39.887360 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac1d19a6_ed3e_43b4_b629_df50141d4ae8.slice/crio-7346d5dbd0c339026c16a2b2ea43ca522319b515cf8d297dd0ee3529edffcf71\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac1d19a6_ed3e_43b4_b629_df50141d4ae8.slice\": RecentStats: unable to find data in memory cache]" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.969674 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-786bc4c684-kzltd" event={"ID":"443191c7-2ebd-4ac2-a36e-d6c36958dba6","Type":"ContainerStarted","Data":"6e7fb6d3815dace322d2364536be9e45dc321e6f3f0e4bca136f6c8a344cbcb1"} Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.969751 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-786bc4c684-kzltd" event={"ID":"443191c7-2ebd-4ac2-a36e-d6c36958dba6","Type":"ContainerStarted","Data":"9bc3bb3f9e68f2f12cc5a3b0ea93a883dbfcc14eeb78ff563edafd07324bedac"} Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.974414 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" event={"ID":"49f3fe18-ffd4-4273-97eb-98e94f198608","Type":"ContainerStarted","Data":"d67a8e08e67d1b5c0a287048b6089dc3a5d54301860c2fb3fb7cf233d15ef1f6"} Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.974448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" event={"ID":"49f3fe18-ffd4-4273-97eb-98e94f198608","Type":"ContainerStarted","Data":"0066cc4564f012cfaed467dcba567d0323f7a939c1b0edcde86aa1fbe57e1f41"} Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.980875 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e821fb2e-1d49-4ae2-9404-1e6efa9009a5","Type":"ContainerStarted","Data":"ffb75b52e81db4522954421bf069c5fcfebe6aed48143dbfb6966e866acae513"} Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 
09:20:39.985815 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.988698 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="9d73f7712585932f964f047e324821c397ba92b8108032a413dc8cbc1ba74f9f" exitCode=1 Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.988769 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"9d73f7712585932f964f047e324821c397ba92b8108032a413dc8cbc1ba74f9f"} Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.988806 4869 scope.go:117] "RemoveContainer" containerID="3683451d7c0cf12aa3458c040cb03518e7577ef80ba5cfb696720772196bef1e" Mar 14 09:20:39 crc kubenswrapper[4869]: I0314 09:20:39.990962 4869 scope.go:117] "RemoveContainer" containerID="9d73f7712585932f964f047e324821c397ba92b8108032a413dc8cbc1ba74f9f" Mar 14 09:20:39 crc kubenswrapper[4869]: E0314 09:20:39.991231 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.012553 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerName="cinder-scheduler" containerID="cri-o://1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b" gracePeriod=30 Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.012747 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerName="probe" containerID="cri-o://7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead" gracePeriod=30 Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.046783 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.046763179 podStartE2EDuration="5.046763179s" podCreationTimestamp="2026-03-14 09:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:40.038038984 +0000 UTC m=+1393.010321057" watchObservedRunningTime="2026-03-14 09:20:40.046763179 +0000 UTC m=+1393.019045222" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.237538 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:40 crc kubenswrapper[4869]: E0314 09:20:40.238945 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f is running failed: container process not found" containerID="264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 14 09:20:40 crc kubenswrapper[4869]: E0314 09:20:40.239298 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f is running failed: container process not found" containerID="264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 14 09:20:40 crc kubenswrapper[4869]: E0314 09:20:40.239623 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f is running failed: container process not found" containerID="264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 14 09:20:40 crc kubenswrapper[4869]: E0314 09:20:40.239652 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f is running failed: container process not found" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.425875 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75cd657fd5-hrb28"] Mar 14 09:20:40 crc kubenswrapper[4869]: E0314 09:20:40.426414 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" containerName="init" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.426430 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" containerName="init" Mar 14 09:20:40 crc kubenswrapper[4869]: E0314 09:20:40.426457 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" containerName="dnsmasq-dns" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.426465 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" containerName="dnsmasq-dns" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.426736 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1d19a6-ed3e-43b4-b629-df50141d4ae8" containerName="dnsmasq-dns" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.427998 4869 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.430982 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.431049 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.436187 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75cd657fd5-hrb28"] Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.579825 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-httpd-config\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.579899 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-internal-tls-certs\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.579971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-config\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.580040 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-combined-ca-bundle\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.580109 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-ovndb-tls-certs\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.580140 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-public-tls-certs\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.580252 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsppd\" (UniqueName: \"kubernetes.io/projected/cc3b5757-7791-4168-9d0b-0425525fc6b9-kube-api-access-wsppd\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.683030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsppd\" (UniqueName: \"kubernetes.io/projected/cc3b5757-7791-4168-9d0b-0425525fc6b9-kube-api-access-wsppd\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.683150 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-httpd-config\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.683195 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-internal-tls-certs\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.683243 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-config\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.683316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-combined-ca-bundle\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.683365 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-ovndb-tls-certs\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.683396 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-public-tls-certs\") pod 
\"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.694941 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-ovndb-tls-certs\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.699296 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-internal-tls-certs\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.704229 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-public-tls-certs\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.701280 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-combined-ca-bundle\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.705586 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-config\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc 
kubenswrapper[4869]: I0314 09:20:40.710851 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsppd\" (UniqueName: \"kubernetes.io/projected/cc3b5757-7791-4168-9d0b-0425525fc6b9-kube-api-access-wsppd\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.720267 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cc3b5757-7791-4168-9d0b-0425525fc6b9-httpd-config\") pod \"neutron-75cd657fd5-hrb28\" (UID: \"cc3b5757-7791-4168-9d0b-0425525fc6b9\") " pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:40 crc kubenswrapper[4869]: I0314 09:20:40.773039 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.023560 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-786bc4c684-kzltd" event={"ID":"443191c7-2ebd-4ac2-a36e-d6c36958dba6","Type":"ContainerStarted","Data":"6dcc1369b777b98fbbaf29434849b469f22f1300adbffe8d8da52febf4d4592a"} Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.024348 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.027614 4869 generic.go:334] "Generic (PLEG): container finished" podID="49f3fe18-ffd4-4273-97eb-98e94f198608" containerID="d67a8e08e67d1b5c0a287048b6089dc3a5d54301860c2fb3fb7cf233d15ef1f6" exitCode=0 Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.027665 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" event={"ID":"49f3fe18-ffd4-4273-97eb-98e94f198608","Type":"ContainerDied","Data":"d67a8e08e67d1b5c0a287048b6089dc3a5d54301860c2fb3fb7cf233d15ef1f6"} Mar 14 09:20:41 crc 
kubenswrapper[4869]: I0314 09:20:41.027687 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" event={"ID":"49f3fe18-ffd4-4273-97eb-98e94f198608","Type":"ContainerStarted","Data":"294ef10949f3b2561553f5e1a6547394e9a8c1ed183f46add858a1790ad88c8f"} Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.028557 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.038255 4869 generic.go:334] "Generic (PLEG): container finished" podID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerID="7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead" exitCode=0 Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.038361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f39f4b10-107f-4919-bcf6-820efd1b82ff","Type":"ContainerDied","Data":"7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead"} Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.040523 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e0825fa-2453-46a0-b677-79808694bba8" containerID="264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f" exitCode=1 Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.040581 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerDied","Data":"264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f"} Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.040615 4869 scope.go:117] "RemoveContainer" containerID="a94df3683f4de701c59fa43e469daf0695f9b06083105d6ba6172c5e734f3124" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.041301 4869 scope.go:117] "RemoveContainer" containerID="264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f" Mar 14 09:20:41 crc kubenswrapper[4869]: E0314 
09:20:41.041581 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0e0825fa-2453-46a0-b677-79808694bba8)\"" pod="openstack/watcher-decision-engine-0" podUID="0e0825fa-2453-46a0-b677-79808694bba8" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.065929 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-786bc4c684-kzltd" podStartSLOduration=3.065906027 podStartE2EDuration="3.065906027s" podCreationTimestamp="2026-03-14 09:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:41.052776044 +0000 UTC m=+1394.025058097" watchObservedRunningTime="2026-03-14 09:20:41.065906027 +0000 UTC m=+1394.038188080" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.501244 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" podStartSLOduration=3.501222819 podStartE2EDuration="3.501222819s" podCreationTimestamp="2026-03-14 09:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:41.117776117 +0000 UTC m=+1394.090058180" watchObservedRunningTime="2026-03-14 09:20:41.501222819 +0000 UTC m=+1394.473504872" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.512037 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75cd657fd5-hrb28"] Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.721339 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.820378 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-combined-ca-bundle\") pod \"f39f4b10-107f-4919-bcf6-820efd1b82ff\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.820732 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data\") pod \"f39f4b10-107f-4919-bcf6-820efd1b82ff\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.820815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-scripts\") pod \"f39f4b10-107f-4919-bcf6-820efd1b82ff\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.820925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data-custom\") pod \"f39f4b10-107f-4919-bcf6-820efd1b82ff\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.821057 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f39f4b10-107f-4919-bcf6-820efd1b82ff-etc-machine-id\") pod \"f39f4b10-107f-4919-bcf6-820efd1b82ff\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.821090 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rd7d\" (UniqueName: 
\"kubernetes.io/projected/f39f4b10-107f-4919-bcf6-820efd1b82ff-kube-api-access-5rd7d\") pod \"f39f4b10-107f-4919-bcf6-820efd1b82ff\" (UID: \"f39f4b10-107f-4919-bcf6-820efd1b82ff\") " Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.823022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f39f4b10-107f-4919-bcf6-820efd1b82ff-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f39f4b10-107f-4919-bcf6-820efd1b82ff" (UID: "f39f4b10-107f-4919-bcf6-820efd1b82ff"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.829263 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-scripts" (OuterVolumeSpecName: "scripts") pod "f39f4b10-107f-4919-bcf6-820efd1b82ff" (UID: "f39f4b10-107f-4919-bcf6-820efd1b82ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.830041 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f39f4b10-107f-4919-bcf6-820efd1b82ff" (UID: "f39f4b10-107f-4919-bcf6-820efd1b82ff"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.844659 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f39f4b10-107f-4919-bcf6-820efd1b82ff-kube-api-access-5rd7d" (OuterVolumeSpecName: "kube-api-access-5rd7d") pod "f39f4b10-107f-4919-bcf6-820efd1b82ff" (UID: "f39f4b10-107f-4919-bcf6-820efd1b82ff"). InnerVolumeSpecName "kube-api-access-5rd7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.900728 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f39f4b10-107f-4919-bcf6-820efd1b82ff" (UID: "f39f4b10-107f-4919-bcf6-820efd1b82ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.925971 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.926020 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.926032 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.926040 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rd7d\" (UniqueName: \"kubernetes.io/projected/f39f4b10-107f-4919-bcf6-820efd1b82ff-kube-api-access-5rd7d\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.926053 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f39f4b10-107f-4919-bcf6-820efd1b82ff-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:41 crc kubenswrapper[4869]: I0314 09:20:41.975614 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data" (OuterVolumeSpecName: "config-data") pod "f39f4b10-107f-4919-bcf6-820efd1b82ff" (UID: "f39f4b10-107f-4919-bcf6-820efd1b82ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.027449 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39f4b10-107f-4919-bcf6-820efd1b82ff-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.081369 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75cd657fd5-hrb28" event={"ID":"cc3b5757-7791-4168-9d0b-0425525fc6b9","Type":"ContainerStarted","Data":"31e2bb86d2ebbde919008403e68bd236eeac90a91a93b11efe8beb45277a6adf"} Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.081423 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75cd657fd5-hrb28" event={"ID":"cc3b5757-7791-4168-9d0b-0425525fc6b9","Type":"ContainerStarted","Data":"b342cbb99e1ca5eeb57e5a86c9008cdf38652db2219b475dfa669a40b87d71df"} Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.089172 4869 generic.go:334] "Generic (PLEG): container finished" podID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerID="1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b" exitCode=0 Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.089233 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f39f4b10-107f-4919-bcf6-820efd1b82ff","Type":"ContainerDied","Data":"1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b"} Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.089261 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"f39f4b10-107f-4919-bcf6-820efd1b82ff","Type":"ContainerDied","Data":"792415e97d3cc67b630a19cfc6d5346298a605ea73efbafee46109bb5d2b48d3"} Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.089277 4869 scope.go:117] "RemoveContainer" containerID="7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.089382 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.197009 4869 scope.go:117] "RemoveContainer" containerID="1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.212581 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.229014 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.238545 4869 scope.go:117] "RemoveContainer" containerID="7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.239181 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:42 crc kubenswrapper[4869]: E0314 09:20:42.239774 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerName="probe" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.239799 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerName="probe" Mar 14 09:20:42 crc kubenswrapper[4869]: E0314 09:20:42.239827 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerName="cinder-scheduler" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.239836 4869 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerName="cinder-scheduler" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.240109 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerName="probe" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.240138 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39f4b10-107f-4919-bcf6-820efd1b82ff" containerName="cinder-scheduler" Mar 14 09:20:42 crc kubenswrapper[4869]: E0314 09:20:42.240743 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead\": container with ID starting with 7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead not found: ID does not exist" containerID="7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.240775 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead"} err="failed to get container status \"7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead\": rpc error: code = NotFound desc = could not find container \"7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead\": container with ID starting with 7a7d5dbe6697b65f44a756a788c32b270f19b7ac2742108bddbb16deb1b04ead not found: ID does not exist" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.240799 4869 scope.go:117] "RemoveContainer" containerID="1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b" Mar 14 09:20:42 crc kubenswrapper[4869]: E0314 09:20:42.241059 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b\": 
container with ID starting with 1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b not found: ID does not exist" containerID="1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.241082 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b"} err="failed to get container status \"1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b\": rpc error: code = NotFound desc = could not find container \"1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b\": container with ID starting with 1a1a234a3236e8f8e7c234ce6fd5d4483840eaf3621e202ef535cc31a59ee78b not found: ID does not exist" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.241459 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.244002 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.253656 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.339341 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.339768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9blh\" (UniqueName: \"kubernetes.io/projected/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-kube-api-access-q9blh\") pod 
\"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.339807 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.339868 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-scripts\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.339950 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.340187 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-config-data\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.441747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-scripts\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 
14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.441806 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.441875 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-config-data\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.441920 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.441942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9blh\" (UniqueName: \"kubernetes.io/projected/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-kube-api-access-q9blh\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.441967 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.443331 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.450478 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.454997 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-scripts\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.466202 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9blh\" (UniqueName: \"kubernetes.io/projected/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-kube-api-access-q9blh\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.469225 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.469701 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3-config-data\") pod \"cinder-scheduler-0\" (UID: \"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3\") " 
pod="openstack/cinder-scheduler-0" Mar 14 09:20:42 crc kubenswrapper[4869]: I0314 09:20:42.572826 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 14 09:20:43 crc kubenswrapper[4869]: I0314 09:20:43.112750 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 14 09:20:43 crc kubenswrapper[4869]: I0314 09:20:43.126450 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75cd657fd5-hrb28" event={"ID":"cc3b5757-7791-4168-9d0b-0425525fc6b9","Type":"ContainerStarted","Data":"4f0354bc27282d8774cc47c110d74a2d4e093cec0e5a52ebb86540a56aadbd72"} Mar 14 09:20:43 crc kubenswrapper[4869]: I0314 09:20:43.126539 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:20:43 crc kubenswrapper[4869]: I0314 09:20:43.161232 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75cd657fd5-hrb28" podStartSLOduration=3.16120919 podStartE2EDuration="3.16120919s" podCreationTimestamp="2026-03-14 09:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:43.153762946 +0000 UTC m=+1396.126045029" watchObservedRunningTime="2026-03-14 09:20:43.16120919 +0000 UTC m=+1396.133491253" Mar 14 09:20:43 crc kubenswrapper[4869]: I0314 09:20:43.716690 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f39f4b10-107f-4919-bcf6-820efd1b82ff" path="/var/lib/kubelet/pods/f39f4b10-107f-4919-bcf6-820efd1b82ff/volumes" Mar 14 09:20:44 crc kubenswrapper[4869]: I0314 09:20:44.146109 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3","Type":"ContainerStarted","Data":"ca626a6ca23c60643b21218e31bc94adff8c99be6b1812c142aee0288642a91c"} Mar 14 09:20:44 crc 
kubenswrapper[4869]: I0314 09:20:44.146153 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3","Type":"ContainerStarted","Data":"3db76d51ab03d07cbc6bddeaa2993850807a9d2fcdd8a9c7bcc21cf828b26507"} Mar 14 09:20:44 crc kubenswrapper[4869]: I0314 09:20:44.149490 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="aad3ebc17078f03f839b6596f0e0c9602b1b0d55731a40e54c6502807b95455b" exitCode=1 Mar 14 09:20:44 crc kubenswrapper[4869]: I0314 09:20:44.149549 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"aad3ebc17078f03f839b6596f0e0c9602b1b0d55731a40e54c6502807b95455b"} Mar 14 09:20:44 crc kubenswrapper[4869]: I0314 09:20:44.149857 4869 scope.go:117] "RemoveContainer" containerID="6a05c7fd61f3133eb11294b7d7a7eb6fdeb39b8f8720b8ed360334d6d99854a3" Mar 14 09:20:44 crc kubenswrapper[4869]: I0314 09:20:44.150837 4869 scope.go:117] "RemoveContainer" containerID="aad3ebc17078f03f839b6596f0e0c9602b1b0d55731a40e54c6502807b95455b" Mar 14 09:20:44 crc kubenswrapper[4869]: E0314 09:20:44.151224 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:20:44 crc kubenswrapper[4869]: I0314 09:20:44.405136 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:20:44 crc kubenswrapper[4869]: I0314 09:20:44.405182 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:20:44 crc 
kubenswrapper[4869]: I0314 09:20:44.538617 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:20:44 crc kubenswrapper[4869]: I0314 09:20:44.539606 4869 scope.go:117] "RemoveContainer" containerID="9d73f7712585932f964f047e324821c397ba92b8108032a413dc8cbc1ba74f9f" Mar 14 09:20:44 crc kubenswrapper[4869]: E0314 09:20:44.539858 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:20:44 crc kubenswrapper[4869]: I0314 09:20:44.543542 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.168032 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-74555fbb85-j9lkj"] Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.171685 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3","Type":"ContainerStarted","Data":"530d160511d6ab1dc4305ba28bebee71e0a79d5403dbe3a929119fdda4f288c1"} Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.171767 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.178829 4869 scope.go:117] "RemoveContainer" containerID="9d73f7712585932f964f047e324821c397ba92b8108032a413dc8cbc1ba74f9f" Mar 14 09:20:45 crc kubenswrapper[4869]: E0314 09:20:45.179029 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.179662 4869 scope.go:117] "RemoveContainer" containerID="aad3ebc17078f03f839b6596f0e0c9602b1b0d55731a40e54c6502807b95455b" Mar 14 09:20:45 crc kubenswrapper[4869]: E0314 09:20:45.179866 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.180085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.180232 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.180399 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.182217 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-74555fbb85-j9lkj"] Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.209380 4869 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.209355149 podStartE2EDuration="3.209355149s" podCreationTimestamp="2026-03-14 09:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:45.197623569 +0000 UTC m=+1398.169905642" watchObservedRunningTime="2026-03-14 09:20:45.209355149 +0000 UTC m=+1398.181637202" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.307778 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-internal-tls-certs\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.307932 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-run-httpd\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.308003 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-combined-ca-bundle\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.308114 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-etc-swift\") pod 
\"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.308298 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-config-data\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.308496 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2nd5\" (UniqueName: \"kubernetes.io/projected/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-kube-api-access-z2nd5\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.308600 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-log-httpd\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.309016 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-public-tls-certs\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.411091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2nd5\" (UniqueName: 
\"kubernetes.io/projected/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-kube-api-access-z2nd5\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.411158 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-log-httpd\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.411285 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-public-tls-certs\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.411333 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-internal-tls-certs\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.411363 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-run-httpd\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.411388 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-combined-ca-bundle\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.411420 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-etc-swift\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.411477 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-config-data\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.416058 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-run-httpd\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.417924 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-log-httpd\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.421589 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-public-tls-certs\") pod 
\"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.422734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-etc-swift\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.423191 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-internal-tls-certs\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.428558 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-combined-ca-bundle\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.429251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-config-data\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.433160 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2nd5\" (UniqueName: \"kubernetes.io/projected/c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46-kube-api-access-z2nd5\") pod \"swift-proxy-74555fbb85-j9lkj\" (UID: \"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46\") " 
pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:45 crc kubenswrapper[4869]: I0314 09:20:45.502385 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:46 crc kubenswrapper[4869]: I0314 09:20:46.071140 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-74555fbb85-j9lkj"] Mar 14 09:20:46 crc kubenswrapper[4869]: I0314 09:20:46.547535 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 14 09:20:47 crc kubenswrapper[4869]: I0314 09:20:47.573346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 14 09:20:48 crc kubenswrapper[4869]: I0314 09:20:48.566284 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:20:48 crc kubenswrapper[4869]: I0314 09:20:48.654819 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b869c6f79-cntzf"] Mar 14 09:20:48 crc kubenswrapper[4869]: I0314 09:20:48.655064 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" podUID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" containerName="dnsmasq-dns" containerID="cri-o://cb9656cbe4b554a608da488ef7353dcb98b0c014c1540780279fb41d9f3d109b" gracePeriod=10 Mar 14 09:20:48 crc kubenswrapper[4869]: I0314 09:20:48.937951 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 14 09:20:49 crc kubenswrapper[4869]: I0314 09:20:49.168832 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:49 crc kubenswrapper[4869]: I0314 09:20:49.169152 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" 
containerName="ceilometer-central-agent" containerID="cri-o://0593c0e4477c49653279421c2cb2e157da54aafd010b2b01509239c45b039657" gracePeriod=30 Mar 14 09:20:49 crc kubenswrapper[4869]: I0314 09:20:49.170359 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="proxy-httpd" containerID="cri-o://0f0aecb9e45b2ab6579445ab476a5b527abac9725dfc454b404f5df2c283eb93" gracePeriod=30 Mar 14 09:20:49 crc kubenswrapper[4869]: I0314 09:20:49.170445 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="sg-core" containerID="cri-o://265f4c705ccdc82ce7cc6d355c6fccdd972d2c0deba1c2173a04c40ad5f72ef2" gracePeriod=30 Mar 14 09:20:49 crc kubenswrapper[4869]: I0314 09:20:49.170558 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="ceilometer-notification-agent" containerID="cri-o://e5730cfe5eb7ea5c7d4db6f8025ceaf8e5cf6488d4738dc4439d1b102009eecf" gracePeriod=30 Mar 14 09:20:49 crc kubenswrapper[4869]: I0314 09:20:49.218530 4869 generic.go:334] "Generic (PLEG): container finished" podID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" containerID="cb9656cbe4b554a608da488ef7353dcb98b0c014c1540780279fb41d9f3d109b" exitCode=0 Mar 14 09:20:49 crc kubenswrapper[4869]: I0314 09:20:49.218560 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" event={"ID":"d0a3057f-b699-4f14-bfa0-7bda292b3c82","Type":"ContainerDied","Data":"cb9656cbe4b554a608da488ef7353dcb98b0c014c1540780279fb41d9f3d109b"} Mar 14 09:20:50 crc kubenswrapper[4869]: I0314 09:20:50.236806 4869 generic.go:334] "Generic (PLEG): container finished" podID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerID="0f0aecb9e45b2ab6579445ab476a5b527abac9725dfc454b404f5df2c283eb93" 
exitCode=0 Mar 14 09:20:50 crc kubenswrapper[4869]: I0314 09:20:50.237114 4869 generic.go:334] "Generic (PLEG): container finished" podID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerID="265f4c705ccdc82ce7cc6d355c6fccdd972d2c0deba1c2173a04c40ad5f72ef2" exitCode=2 Mar 14 09:20:50 crc kubenswrapper[4869]: I0314 09:20:50.237123 4869 generic.go:334] "Generic (PLEG): container finished" podID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerID="0593c0e4477c49653279421c2cb2e157da54aafd010b2b01509239c45b039657" exitCode=0 Mar 14 09:20:50 crc kubenswrapper[4869]: I0314 09:20:50.237142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerDied","Data":"0f0aecb9e45b2ab6579445ab476a5b527abac9725dfc454b404f5df2c283eb93"} Mar 14 09:20:50 crc kubenswrapper[4869]: I0314 09:20:50.237166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerDied","Data":"265f4c705ccdc82ce7cc6d355c6fccdd972d2c0deba1c2173a04c40ad5f72ef2"} Mar 14 09:20:50 crc kubenswrapper[4869]: I0314 09:20:50.237176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerDied","Data":"0593c0e4477c49653279421c2cb2e157da54aafd010b2b01509239c45b039657"} Mar 14 09:20:50 crc kubenswrapper[4869]: I0314 09:20:50.237729 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:20:50 crc kubenswrapper[4869]: I0314 09:20:50.238547 4869 scope.go:117] "RemoveContainer" containerID="264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f" Mar 14 09:20:50 crc kubenswrapper[4869]: E0314 09:20:50.238813 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0e0825fa-2453-46a0-b677-79808694bba8)\"" pod="openstack/watcher-decision-engine-0" podUID="0e0825fa-2453-46a0-b677-79808694bba8" Mar 14 09:20:51 crc kubenswrapper[4869]: I0314 09:20:51.476686 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" podUID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: connect: connection refused" Mar 14 09:20:52 crc kubenswrapper[4869]: I0314 09:20:52.260816 4869 generic.go:334] "Generic (PLEG): container finished" podID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerID="e5730cfe5eb7ea5c7d4db6f8025ceaf8e5cf6488d4738dc4439d1b102009eecf" exitCode=0 Mar 14 09:20:52 crc kubenswrapper[4869]: I0314 09:20:52.260846 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerDied","Data":"e5730cfe5eb7ea5c7d4db6f8025ceaf8e5cf6488d4738dc4439d1b102009eecf"} Mar 14 09:20:53 crc kubenswrapper[4869]: I0314 09:20:53.029315 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 14 09:20:54 crc kubenswrapper[4869]: I0314 09:20:54.171631 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:20:54 crc kubenswrapper[4869]: I0314 09:20:54.172454 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3c3c8469-db90-4d75-bfb4-6f2be6ee77bd" containerName="kube-state-metrics" containerID="cri-o://56e97c1ac5294487a499e77ce4369a79ab53794b545c4ba6b799ca5155dcaf3f" gracePeriod=30 Mar 14 09:20:55 crc kubenswrapper[4869]: I0314 09:20:55.295453 4869 generic.go:334] "Generic (PLEG): container finished" podID="3c3c8469-db90-4d75-bfb4-6f2be6ee77bd" 
containerID="56e97c1ac5294487a499e77ce4369a79ab53794b545c4ba6b799ca5155dcaf3f" exitCode=2 Mar 14 09:20:55 crc kubenswrapper[4869]: I0314 09:20:55.297413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c3c8469-db90-4d75-bfb4-6f2be6ee77bd","Type":"ContainerDied","Data":"56e97c1ac5294487a499e77ce4369a79ab53794b545c4ba6b799ca5155dcaf3f"} Mar 14 09:20:55 crc kubenswrapper[4869]: E0314 09:20:55.630136 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Mar 14 09:20:55 crc kubenswrapper[4869]: E0314 09:20:55.630465 4869 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.153:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Mar 14 09:20:55 crc kubenswrapper[4869]: E0314 09:20:55.630686 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:38.102.83.153:5001/podified-master-centos10/openstack-openstackclient:watcher_latest,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b4hdch79hd9hfbh565h68h78h5c5h698h658h64fh646h8ch54hfdh677h675hcchddh5f6h7fh67fh5cbh664h598h89h566h8bh5c5h58dh5fq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q2cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(35c6d1fd-be8f-4390-9199-bf573760717b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 14 09:20:55 crc kubenswrapper[4869]: E0314 09:20:55.632316 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="35c6d1fd-be8f-4390-9199-bf573760717b" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.164222 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.271624 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2c5b\" (UniqueName: \"kubernetes.io/projected/3c3c8469-db90-4d75-bfb4-6f2be6ee77bd-kube-api-access-d2c5b\") pod \"3c3c8469-db90-4d75-bfb4-6f2be6ee77bd\" (UID: \"3c3c8469-db90-4d75-bfb4-6f2be6ee77bd\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.281772 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3c8469-db90-4d75-bfb4-6f2be6ee77bd-kube-api-access-d2c5b" (OuterVolumeSpecName: "kube-api-access-d2c5b") pod "3c3c8469-db90-4d75-bfb4-6f2be6ee77bd" (UID: "3c3c8469-db90-4d75-bfb4-6f2be6ee77bd"). InnerVolumeSpecName "kube-api-access-d2c5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.306603 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.312991 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.321833 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"237e08cf-4de9-4873-925d-8502d2e2abe5","Type":"ContainerDied","Data":"b6f3e4648f88aab3bc9168d26d16b82a55914ac4120fde7a6d48237a21f9352a"} Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.321888 4869 scope.go:117] "RemoveContainer" containerID="0f0aecb9e45b2ab6579445ab476a5b527abac9725dfc454b404f5df2c283eb93" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.322028 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.330911 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" event={"ID":"d0a3057f-b699-4f14-bfa0-7bda292b3c82","Type":"ContainerDied","Data":"816348e27aae2facdbb375ba1001fd7353da2207b0a1c3b1e189f9ffc84b11be"} Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.330987 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b869c6f79-cntzf" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.333798 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3c3c8469-db90-4d75-bfb4-6f2be6ee77bd","Type":"ContainerDied","Data":"a6c6ae3359ee53a5e6c33e2c2872c4d0907cbec8a893783a1236f3f035d1b40a"} Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.333881 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.348670 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-74555fbb85-j9lkj" event={"ID":"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46","Type":"ContainerStarted","Data":"c78eaea34156aee9ba4fccb9795db8f25e5ba6f3bf276b55962da64ddb6311a6"} Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.348707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-74555fbb85-j9lkj" event={"ID":"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46","Type":"ContainerStarted","Data":"1eb1776562a59d2e32586df1bb8d8b129289367bb0a157de9138d716f132ee09"} Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.348718 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-74555fbb85-j9lkj" event={"ID":"c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46","Type":"ContainerStarted","Data":"dcfb1f62a7dc24f7db0a448f5a9c9e8c35e4d76f8b4f0d4a4d7aee1bc1b7fb35"} Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.348765 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.348782 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.376764 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2c5b\" (UniqueName: \"kubernetes.io/projected/3c3c8469-db90-4d75-bfb4-6f2be6ee77bd-kube-api-access-d2c5b\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.379807 4869 scope.go:117] "RemoveContainer" containerID="265f4c705ccdc82ce7cc6d355c6fccdd972d2c0deba1c2173a04c40ad5f72ef2" Mar 14 09:20:56 crc kubenswrapper[4869]: E0314 09:20:56.379874 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" 
with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.153:5001/podified-master-centos10/openstack-openstackclient:watcher_latest\\\"\"" pod="openstack/openstackclient" podUID="35c6d1fd-be8f-4390-9199-bf573760717b" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.422768 4869 scope.go:117] "RemoveContainer" containerID="e5730cfe5eb7ea5c7d4db6f8025ceaf8e5cf6488d4738dc4439d1b102009eecf" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.449746 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-74555fbb85-j9lkj" podStartSLOduration=11.449724749 podStartE2EDuration="11.449724749s" podCreationTimestamp="2026-03-14 09:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:20:56.423149453 +0000 UTC m=+1409.395431496" watchObservedRunningTime="2026-03-14 09:20:56.449724749 +0000 UTC m=+1409.422006812" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.455949 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.458707 4869 scope.go:117] "RemoveContainer" containerID="0593c0e4477c49653279421c2cb2e157da54aafd010b2b01509239c45b039657" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.479731 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.480611 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-config\") pod \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.480695 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgzxf\" (UniqueName: 
\"kubernetes.io/projected/d0a3057f-b699-4f14-bfa0-7bda292b3c82-kube-api-access-dgzxf\") pod \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.480734 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-svc\") pod \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.480787 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-swift-storage-0\") pod \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.480820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-nb\") pod \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.480841 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-run-httpd\") pod \"237e08cf-4de9-4873-925d-8502d2e2abe5\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.480890 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-combined-ca-bundle\") pod \"237e08cf-4de9-4873-925d-8502d2e2abe5\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 
09:20:56.480913 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-sb\") pod \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\" (UID: \"d0a3057f-b699-4f14-bfa0-7bda292b3c82\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.480955 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjhck\" (UniqueName: \"kubernetes.io/projected/237e08cf-4de9-4873-925d-8502d2e2abe5-kube-api-access-pjhck\") pod \"237e08cf-4de9-4873-925d-8502d2e2abe5\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.480983 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-sg-core-conf-yaml\") pod \"237e08cf-4de9-4873-925d-8502d2e2abe5\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.481044 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-scripts\") pod \"237e08cf-4de9-4873-925d-8502d2e2abe5\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.481110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-log-httpd\") pod \"237e08cf-4de9-4873-925d-8502d2e2abe5\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.481172 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-config-data\") pod 
\"237e08cf-4de9-4873-925d-8502d2e2abe5\" (UID: \"237e08cf-4de9-4873-925d-8502d2e2abe5\") " Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.481951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "237e08cf-4de9-4873-925d-8502d2e2abe5" (UID: "237e08cf-4de9-4873-925d-8502d2e2abe5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.483957 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "237e08cf-4de9-4873-925d-8502d2e2abe5" (UID: "237e08cf-4de9-4873-925d-8502d2e2abe5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.496703 4869 scope.go:117] "RemoveContainer" containerID="cb9656cbe4b554a608da488ef7353dcb98b0c014c1540780279fb41d9f3d109b" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.496909 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:20:56 crc kubenswrapper[4869]: E0314 09:20:56.497361 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="proxy-httpd" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497380 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="proxy-httpd" Mar 14 09:20:56 crc kubenswrapper[4869]: E0314 09:20:56.497398 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3c8469-db90-4d75-bfb4-6f2be6ee77bd" containerName="kube-state-metrics" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497406 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3c3c8469-db90-4d75-bfb4-6f2be6ee77bd" containerName="kube-state-metrics" Mar 14 09:20:56 crc kubenswrapper[4869]: E0314 09:20:56.497432 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="ceilometer-notification-agent" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497440 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="ceilometer-notification-agent" Mar 14 09:20:56 crc kubenswrapper[4869]: E0314 09:20:56.497454 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="sg-core" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497461 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="sg-core" Mar 14 09:20:56 crc kubenswrapper[4869]: E0314 09:20:56.497477 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" containerName="dnsmasq-dns" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497486 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" containerName="dnsmasq-dns" Mar 14 09:20:56 crc kubenswrapper[4869]: E0314 09:20:56.497523 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="ceilometer-central-agent" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497532 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="ceilometer-central-agent" Mar 14 09:20:56 crc kubenswrapper[4869]: E0314 09:20:56.497549 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" containerName="init" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497558 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" containerName="init" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497813 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="proxy-httpd" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497827 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" containerName="dnsmasq-dns" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497839 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="sg-core" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497866 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="ceilometer-central-agent" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497876 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3c8469-db90-4d75-bfb4-6f2be6ee77bd" containerName="kube-state-metrics" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.497887 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" containerName="ceilometer-notification-agent" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.498649 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.498734 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.502543 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237e08cf-4de9-4873-925d-8502d2e2abe5-kube-api-access-pjhck" (OuterVolumeSpecName: "kube-api-access-pjhck") pod "237e08cf-4de9-4873-925d-8502d2e2abe5" (UID: "237e08cf-4de9-4873-925d-8502d2e2abe5"). 
InnerVolumeSpecName "kube-api-access-pjhck". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.502889 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.502929 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.504103 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-scripts" (OuterVolumeSpecName: "scripts") pod "237e08cf-4de9-4873-925d-8502d2e2abe5" (UID: "237e08cf-4de9-4873-925d-8502d2e2abe5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.509213 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a3057f-b699-4f14-bfa0-7bda292b3c82-kube-api-access-dgzxf" (OuterVolumeSpecName: "kube-api-access-dgzxf") pod "d0a3057f-b699-4f14-bfa0-7bda292b3c82" (UID: "d0a3057f-b699-4f14-bfa0-7bda292b3c82"). InnerVolumeSpecName "kube-api-access-dgzxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.525434 4869 scope.go:117] "RemoveContainer" containerID="89ac629ee8f19097ea11745c28f503e92f7315fda111195b229df18b85d22fff" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.566432 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d0a3057f-b699-4f14-bfa0-7bda292b3c82" (UID: "d0a3057f-b699-4f14-bfa0-7bda292b3c82"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.573756 4869 scope.go:117] "RemoveContainer" containerID="56e97c1ac5294487a499e77ce4369a79ab53794b545c4ba6b799ca5155dcaf3f" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.584755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.584956 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c7mm\" (UniqueName: \"kubernetes.io/projected/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-kube-api-access-9c7mm\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.584995 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.585027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.585120 4869 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.585132 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.585141 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgzxf\" (UniqueName: \"kubernetes.io/projected/d0a3057f-b699-4f14-bfa0-7bda292b3c82-kube-api-access-dgzxf\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.585151 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/237e08cf-4de9-4873-925d-8502d2e2abe5-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.585159 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.585168 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjhck\" (UniqueName: \"kubernetes.io/projected/237e08cf-4de9-4873-925d-8502d2e2abe5-kube-api-access-pjhck\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.601589 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0a3057f-b699-4f14-bfa0-7bda292b3c82" (UID: "d0a3057f-b699-4f14-bfa0-7bda292b3c82"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.604684 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-config" (OuterVolumeSpecName: "config") pod "d0a3057f-b699-4f14-bfa0-7bda292b3c82" (UID: "d0a3057f-b699-4f14-bfa0-7bda292b3c82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.608822 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "237e08cf-4de9-4873-925d-8502d2e2abe5" (UID: "237e08cf-4de9-4873-925d-8502d2e2abe5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.626085 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d0a3057f-b699-4f14-bfa0-7bda292b3c82" (UID: "d0a3057f-b699-4f14-bfa0-7bda292b3c82"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.644720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "237e08cf-4de9-4873-925d-8502d2e2abe5" (UID: "237e08cf-4de9-4873-925d-8502d2e2abe5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.647254 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0a3057f-b699-4f14-bfa0-7bda292b3c82" (UID: "d0a3057f-b699-4f14-bfa0-7bda292b3c82"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.679560 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-config-data" (OuterVolumeSpecName: "config-data") pod "237e08cf-4de9-4873-925d-8502d2e2abe5" (UID: "237e08cf-4de9-4873-925d-8502d2e2abe5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.686710 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c7mm\" (UniqueName: \"kubernetes.io/projected/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-kube-api-access-9c7mm\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.686792 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.686841 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" 
(UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.686924 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.687086 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.687110 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.687123 4869 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.687138 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0a3057f-b699-4f14-bfa0-7bda292b3c82-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.687152 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.687163 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.687185 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/237e08cf-4de9-4873-925d-8502d2e2abe5-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.691344 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.691814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.695111 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.706404 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c7mm\" (UniqueName: \"kubernetes.io/projected/b2ebe80d-8ef3-4dac-b796-1c0ced4ad905-kube-api-access-9c7mm\") pod \"kube-state-metrics-0\" (UID: \"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905\") " pod="openstack/kube-state-metrics-0" Mar 14 09:20:56 crc kubenswrapper[4869]: I0314 09:20:56.829104 4869 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.005702 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b869c6f79-cntzf"] Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.012047 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b869c6f79-cntzf"] Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.033734 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.062051 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.080692 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.083131 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.086778 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.086946 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.087107 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.092371 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.196231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.196708 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.196749 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-scripts\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.196815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxk2s\" (UniqueName: \"kubernetes.io/projected/ad503a03-3257-45a6-b1c0-d83794238d40-kube-api-access-xxk2s\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.196845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-config-data\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.196904 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-run-httpd\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 
09:20:57.196988 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-log-httpd\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.197281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.299215 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.299281 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-scripts\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.299356 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxk2s\" (UniqueName: \"kubernetes.io/projected/ad503a03-3257-45a6-b1c0-d83794238d40-kube-api-access-xxk2s\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.299390 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-config-data\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.299438 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-run-httpd\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.299470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-log-httpd\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.299669 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.299723 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.300057 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-run-httpd\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.300426 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-log-httpd\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.308853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-config-data\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.310137 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.310924 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-scripts\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.317534 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.317788 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " 
pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.322448 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxk2s\" (UniqueName: \"kubernetes.io/projected/ad503a03-3257-45a6-b1c0-d83794238d40-kube-api-access-xxk2s\") pod \"ceilometer-0\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.336621 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.373389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905","Type":"ContainerStarted","Data":"ff2b6f07addaa8474a6f93d1afa58a014b5d41c189721e73ce6e11f22693a44d"} Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.402134 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.727177 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="237e08cf-4de9-4873-925d-8502d2e2abe5" path="/var/lib/kubelet/pods/237e08cf-4de9-4873-925d-8502d2e2abe5/volumes" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.728378 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3c8469-db90-4d75-bfb4-6f2be6ee77bd" path="/var/lib/kubelet/pods/3c3c8469-db90-4d75-bfb4-6f2be6ee77bd/volumes" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.729020 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a3057f-b699-4f14-bfa0-7bda292b3c82" path="/var/lib/kubelet/pods/d0a3057f-b699-4f14-bfa0-7bda292b3c82/volumes" Mar 14 09:20:57 crc kubenswrapper[4869]: I0314 09:20:57.890242 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:20:57 crc kubenswrapper[4869]: W0314 09:20:57.894797 4869 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad503a03_3257_45a6_b1c0_d83794238d40.slice/crio-92d1a9efac2b3954ee9f3afb6f227e14c97e4d7c34e21af19c9a5899bd8817d8 WatchSource:0}: Error finding container 92d1a9efac2b3954ee9f3afb6f227e14c97e4d7c34e21af19c9a5899bd8817d8: Status 404 returned error can't find the container with id 92d1a9efac2b3954ee9f3afb6f227e14c97e4d7c34e21af19c9a5899bd8817d8 Mar 14 09:20:58 crc kubenswrapper[4869]: I0314 09:20:58.393275 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerStarted","Data":"92d1a9efac2b3954ee9f3afb6f227e14c97e4d7c34e21af19c9a5899bd8817d8"} Mar 14 09:20:58 crc kubenswrapper[4869]: I0314 09:20:58.704122 4869 scope.go:117] "RemoveContainer" containerID="9d73f7712585932f964f047e324821c397ba92b8108032a413dc8cbc1ba74f9f" Mar 14 09:20:58 crc kubenswrapper[4869]: E0314 09:20:58.704550 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:20:59 crc kubenswrapper[4869]: I0314 09:20:59.415968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b2ebe80d-8ef3-4dac-b796-1c0ced4ad905","Type":"ContainerStarted","Data":"de40b57793f1c9326a116cdd8850f7a007cbf3b00c5b7b70782391472b884ea1"} Mar 14 09:20:59 crc kubenswrapper[4869]: I0314 09:20:59.416157 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 14 09:20:59 crc kubenswrapper[4869]: I0314 09:20:59.419187 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerStarted","Data":"0a6937cb71ff7f7b36f7de491f4a4976cc497f5b106efa59ed8c100a934513e9"} Mar 14 09:20:59 crc kubenswrapper[4869]: I0314 09:20:59.439326 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.060687024 podStartE2EDuration="3.439300307s" podCreationTimestamp="2026-03-14 09:20:56 +0000 UTC" firstStartedPulling="2026-03-14 09:20:57.339985055 +0000 UTC m=+1410.312267108" lastFinishedPulling="2026-03-14 09:20:57.718598338 +0000 UTC m=+1410.690880391" observedRunningTime="2026-03-14 09:20:59.431636238 +0000 UTC m=+1412.403918311" watchObservedRunningTime="2026-03-14 09:20:59.439300307 +0000 UTC m=+1412.411582360" Mar 14 09:20:59 crc kubenswrapper[4869]: I0314 09:20:59.703711 4869 scope.go:117] "RemoveContainer" containerID="aad3ebc17078f03f839b6596f0e0c9602b1b0d55731a40e54c6502807b95455b" Mar 14 09:20:59 crc kubenswrapper[4869]: E0314 09:20:59.703997 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:21:00 crc kubenswrapper[4869]: I0314 09:21:00.237427 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:00 crc kubenswrapper[4869]: I0314 09:21:00.237479 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:00 crc kubenswrapper[4869]: I0314 09:21:00.238315 4869 scope.go:117] "RemoveContainer" containerID="264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f" Mar 14 09:21:00 crc kubenswrapper[4869]: I0314 09:21:00.409777 4869 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:00 crc kubenswrapper[4869]: I0314 09:21:00.520605 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:21:01 crc kubenswrapper[4869]: I0314 09:21:01.448710 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerStarted","Data":"bd52d27675f2545cb3dfd99c38d779150fce0a328987383bcb00e27d75c18dfe"} Mar 14 09:21:01 crc kubenswrapper[4869]: I0314 09:21:01.450808 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerStarted","Data":"9923aa22b54d458de5c94fb65fafeead5d7e83be2f25052935877737c5b05974"} Mar 14 09:21:01 crc kubenswrapper[4869]: I0314 09:21:01.702433 4869 scope.go:117] "RemoveContainer" containerID="f80eb8714d645107731349ccd3eb7bf1625a24d510600e59922765603d4dcabe" Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.470388 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerStarted","Data":"c3fc84d0c02d7cd743d56c7f0309bdbcf76b5fafc610ceb51c06b93a7d956cf5"} Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.865928 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ld855"] Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.867310 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.882760 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ld855"] Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.970872 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-x7vlw"] Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.972259 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.986609 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2437-account-create-update-qbr82"] Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.988254 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.991126 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 14 09:21:02 crc kubenswrapper[4869]: I0314 09:21:02.996052 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x7vlw"] Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.004752 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2437-account-create-update-qbr82"] Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.024066 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmfrc\" (UniqueName: \"kubernetes.io/projected/47a951e5-a6d1-4a1c-88ba-ed578c547d55-kube-api-access-jmfrc\") pod \"nova-api-db-create-ld855\" (UID: \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\") " pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.024475 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a951e5-a6d1-4a1c-88ba-ed578c547d55-operator-scripts\") pod \"nova-api-db-create-ld855\" (UID: \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\") " pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.127187 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc2944-fc75-4309-a83f-3a3087099231-operator-scripts\") pod \"nova-api-2437-account-create-update-qbr82\" (UID: \"8bdc2944-fc75-4309-a83f-3a3087099231\") " pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.127259 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmfrc\" (UniqueName: \"kubernetes.io/projected/47a951e5-a6d1-4a1c-88ba-ed578c547d55-kube-api-access-jmfrc\") pod \"nova-api-db-create-ld855\" (UID: \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\") " pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.127287 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a951e5-a6d1-4a1c-88ba-ed578c547d55-operator-scripts\") pod \"nova-api-db-create-ld855\" (UID: \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\") " pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.127350 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28p5j\" (UniqueName: \"kubernetes.io/projected/0263a6bb-e3ac-4eff-9021-c82a555ae52b-kube-api-access-28p5j\") pod \"nova-cell0-db-create-x7vlw\" (UID: \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\") " pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.127375 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9nmk\" (UniqueName: \"kubernetes.io/projected/8bdc2944-fc75-4309-a83f-3a3087099231-kube-api-access-s9nmk\") pod \"nova-api-2437-account-create-update-qbr82\" (UID: \"8bdc2944-fc75-4309-a83f-3a3087099231\") " pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.127406 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263a6bb-e3ac-4eff-9021-c82a555ae52b-operator-scripts\") pod \"nova-cell0-db-create-x7vlw\" (UID: \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\") " pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.128215 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a951e5-a6d1-4a1c-88ba-ed578c547d55-operator-scripts\") pod \"nova-api-db-create-ld855\" (UID: \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\") " pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.160257 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmfrc\" (UniqueName: \"kubernetes.io/projected/47a951e5-a6d1-4a1c-88ba-ed578c547d55-kube-api-access-jmfrc\") pod \"nova-api-db-create-ld855\" (UID: \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\") " pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.169642 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4s89j"] Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.170998 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.201071 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-638c-account-create-update-c76x8"] Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.201555 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.233327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28p5j\" (UniqueName: \"kubernetes.io/projected/0263a6bb-e3ac-4eff-9021-c82a555ae52b-kube-api-access-28p5j\") pod \"nova-cell0-db-create-x7vlw\" (UID: \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\") " pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.233372 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9nmk\" (UniqueName: \"kubernetes.io/projected/8bdc2944-fc75-4309-a83f-3a3087099231-kube-api-access-s9nmk\") pod \"nova-api-2437-account-create-update-qbr82\" (UID: \"8bdc2944-fc75-4309-a83f-3a3087099231\") " pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.233415 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263a6bb-e3ac-4eff-9021-c82a555ae52b-operator-scripts\") pod \"nova-cell0-db-create-x7vlw\" (UID: \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\") " pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.233499 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc2944-fc75-4309-a83f-3a3087099231-operator-scripts\") pod \"nova-api-2437-account-create-update-qbr82\" (UID: \"8bdc2944-fc75-4309-a83f-3a3087099231\") 
" pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.234329 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc2944-fc75-4309-a83f-3a3087099231-operator-scripts\") pod \"nova-api-2437-account-create-update-qbr82\" (UID: \"8bdc2944-fc75-4309-a83f-3a3087099231\") " pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.235401 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4s89j"] Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.235533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.236922 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263a6bb-e3ac-4eff-9021-c82a555ae52b-operator-scripts\") pod \"nova-cell0-db-create-x7vlw\" (UID: \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\") " pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.245890 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.272877 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9nmk\" (UniqueName: \"kubernetes.io/projected/8bdc2944-fc75-4309-a83f-3a3087099231-kube-api-access-s9nmk\") pod \"nova-api-2437-account-create-update-qbr82\" (UID: \"8bdc2944-fc75-4309-a83f-3a3087099231\") " pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.278119 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-638c-account-create-update-c76x8"] Mar 14 09:21:03 crc 
kubenswrapper[4869]: I0314 09:21:03.294054 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28p5j\" (UniqueName: \"kubernetes.io/projected/0263a6bb-e3ac-4eff-9021-c82a555ae52b-kube-api-access-28p5j\") pod \"nova-cell0-db-create-x7vlw\" (UID: \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\") " pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.319037 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.345281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-operator-scripts\") pod \"nova-cell0-638c-account-create-update-c76x8\" (UID: \"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\") " pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.345370 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7abd39-848f-41f5-9064-6219922e9684-operator-scripts\") pod \"nova-cell1-db-create-4s89j\" (UID: \"ab7abd39-848f-41f5-9064-6219922e9684\") " pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.345450 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tjvz\" (UniqueName: \"kubernetes.io/projected/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-kube-api-access-8tjvz\") pod \"nova-cell0-638c-account-create-update-c76x8\" (UID: \"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\") " pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.345477 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pphlc\" (UniqueName: \"kubernetes.io/projected/ab7abd39-848f-41f5-9064-6219922e9684-kube-api-access-pphlc\") pod \"nova-cell1-db-create-4s89j\" (UID: \"ab7abd39-848f-41f5-9064-6219922e9684\") " pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.379199 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c3ab-account-create-update-bscnd"] Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.385645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.389210 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.399569 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c3ab-account-create-update-bscnd"] Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.448129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-operator-scripts\") pod \"nova-cell0-638c-account-create-update-c76x8\" (UID: \"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\") " pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.448563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7abd39-848f-41f5-9064-6219922e9684-operator-scripts\") pod \"nova-cell1-db-create-4s89j\" (UID: \"ab7abd39-848f-41f5-9064-6219922e9684\") " pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.448625 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8tjvz\" (UniqueName: \"kubernetes.io/projected/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-kube-api-access-8tjvz\") pod \"nova-cell0-638c-account-create-update-c76x8\" (UID: \"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\") " pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.448649 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pphlc\" (UniqueName: \"kubernetes.io/projected/ab7abd39-848f-41f5-9064-6219922e9684-kube-api-access-pphlc\") pod \"nova-cell1-db-create-4s89j\" (UID: \"ab7abd39-848f-41f5-9064-6219922e9684\") " pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.449919 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-operator-scripts\") pod \"nova-cell0-638c-account-create-update-c76x8\" (UID: \"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\") " pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.451190 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7abd39-848f-41f5-9064-6219922e9684-operator-scripts\") pod \"nova-cell1-db-create-4s89j\" (UID: \"ab7abd39-848f-41f5-9064-6219922e9684\") " pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.497351 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tjvz\" (UniqueName: \"kubernetes.io/projected/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-kube-api-access-8tjvz\") pod \"nova-cell0-638c-account-create-update-c76x8\" (UID: \"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\") " pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.497652 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pphlc\" (UniqueName: \"kubernetes.io/projected/ab7abd39-848f-41f5-9064-6219922e9684-kube-api-access-pphlc\") pod \"nova-cell1-db-create-4s89j\" (UID: \"ab7abd39-848f-41f5-9064-6219922e9684\") " pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.553900 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sqgk\" (UniqueName: \"kubernetes.io/projected/5a73e307-e4ba-4102-b4d6-33897be89646-kube-api-access-8sqgk\") pod \"nova-cell1-c3ab-account-create-update-bscnd\" (UID: \"5a73e307-e4ba-4102-b4d6-33897be89646\") " pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.553971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a73e307-e4ba-4102-b4d6-33897be89646-operator-scripts\") pod \"nova-cell1-c3ab-account-create-update-bscnd\" (UID: \"5a73e307-e4ba-4102-b4d6-33897be89646\") " pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.589772 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.656019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a73e307-e4ba-4102-b4d6-33897be89646-operator-scripts\") pod \"nova-cell1-c3ab-account-create-update-bscnd\" (UID: \"5a73e307-e4ba-4102-b4d6-33897be89646\") " pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.656254 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sqgk\" (UniqueName: \"kubernetes.io/projected/5a73e307-e4ba-4102-b4d6-33897be89646-kube-api-access-8sqgk\") pod \"nova-cell1-c3ab-account-create-update-bscnd\" (UID: \"5a73e307-e4ba-4102-b4d6-33897be89646\") " pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.657431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a73e307-e4ba-4102-b4d6-33897be89646-operator-scripts\") pod \"nova-cell1-c3ab-account-create-update-bscnd\" (UID: \"5a73e307-e4ba-4102-b4d6-33897be89646\") " pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.676063 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sqgk\" (UniqueName: \"kubernetes.io/projected/5a73e307-e4ba-4102-b4d6-33897be89646-kube-api-access-8sqgk\") pod \"nova-cell1-c3ab-account-create-update-bscnd\" (UID: \"5a73e307-e4ba-4102-b4d6-33897be89646\") " pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.732885 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.754060 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.800878 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:03 crc kubenswrapper[4869]: I0314 09:21:03.891715 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ld855"] Mar 14 09:21:03 crc kubenswrapper[4869]: W0314 09:21:03.900315 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47a951e5_a6d1_4a1c_88ba_ed578c547d55.slice/crio-727117c7df57c81612b0b25d9044216cede8751730ee4a482c27a91250024cb0 WatchSource:0}: Error finding container 727117c7df57c81612b0b25d9044216cede8751730ee4a482c27a91250024cb0: Status 404 returned error can't find the container with id 727117c7df57c81612b0b25d9044216cede8751730ee4a482c27a91250024cb0 Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.018995 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2437-account-create-update-qbr82"] Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.203992 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x7vlw"] Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.438665 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-638c-account-create-update-c76x8"] Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.453218 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4s89j"] Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.526149 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-x7vlw" event={"ID":"0263a6bb-e3ac-4eff-9021-c82a555ae52b","Type":"ContainerStarted","Data":"93f05cde351fce14f854f4912ad55fea30696dd15f9ec1af8fd39ee9b4ee1bf4"} Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.530294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-638c-account-create-update-c76x8" event={"ID":"0c679d2d-1e39-47a5-b4cf-dba3430a25d9","Type":"ContainerStarted","Data":"affd815c228edc673ecbd0214d852cc45516465c8cdb94a8794b94dd3464eec7"} Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.543492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2437-account-create-update-qbr82" event={"ID":"8bdc2944-fc75-4309-a83f-3a3087099231","Type":"ContainerStarted","Data":"964c988c7c0652c1bf202bcae8a36d8f0c7057e28276e7f14b75ff217e7b1e02"} Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.543576 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2437-account-create-update-qbr82" event={"ID":"8bdc2944-fc75-4309-a83f-3a3087099231","Type":"ContainerStarted","Data":"c320b403f0dae34163dec620e01229cd5d0c2ce527f4f86096cab25682240af0"} Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.545551 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4s89j" event={"ID":"ab7abd39-848f-41f5-9064-6219922e9684","Type":"ContainerStarted","Data":"ad842ef6395cad6af1eed50080d8078211bf4ebdb18f0f87bece3070f499130d"} Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.553123 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerStarted","Data":"da2130f36d90e55d25585cb8b65fa57e5560e52ad7c3025f41736a9e84d96cba"} Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.554043 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="ceilometer-central-agent" containerID="cri-o://0a6937cb71ff7f7b36f7de491f4a4976cc497f5b106efa59ed8c100a934513e9" gracePeriod=30 Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.556502 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.556621 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="proxy-httpd" containerID="cri-o://da2130f36d90e55d25585cb8b65fa57e5560e52ad7c3025f41736a9e84d96cba" gracePeriod=30 Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.557600 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="ceilometer-notification-agent" containerID="cri-o://9923aa22b54d458de5c94fb65fafeead5d7e83be2f25052935877737c5b05974" gracePeriod=30 Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.557755 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="sg-core" containerID="cri-o://c3fc84d0c02d7cd743d56c7f0309bdbcf76b5fafc610ceb51c06b93a7d956cf5" gracePeriod=30 Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.580265 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2437-account-create-update-qbr82" podStartSLOduration=2.580241371 podStartE2EDuration="2.580241371s" podCreationTimestamp="2026-03-14 09:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:04.558125086 +0000 UTC m=+1417.530407139" watchObservedRunningTime="2026-03-14 09:21:04.580241371 +0000 UTC m=+1417.552523424" Mar 14 09:21:04 crc 
kubenswrapper[4869]: I0314 09:21:04.613366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ld855" event={"ID":"47a951e5-a6d1-4a1c-88ba-ed578c547d55","Type":"ContainerStarted","Data":"a4e94d05ebee941a175ce63cc676295b49ac9994b26437e9a468730c00253bfc"} Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.613419 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ld855" event={"ID":"47a951e5-a6d1-4a1c-88ba-ed578c547d55","Type":"ContainerStarted","Data":"727117c7df57c81612b0b25d9044216cede8751730ee4a482c27a91250024cb0"} Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.624162 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9519221820000001 podStartE2EDuration="7.624143025s" podCreationTimestamp="2026-03-14 09:20:57 +0000 UTC" firstStartedPulling="2026-03-14 09:20:57.897898903 +0000 UTC m=+1410.870180956" lastFinishedPulling="2026-03-14 09:21:03.570119746 +0000 UTC m=+1416.542401799" observedRunningTime="2026-03-14 09:21:04.606041308 +0000 UTC m=+1417.578323371" watchObservedRunningTime="2026-03-14 09:21:04.624143025 +0000 UTC m=+1417.596425078" Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.633360 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-ld855" podStartSLOduration=2.633337481 podStartE2EDuration="2.633337481s" podCreationTimestamp="2026-03-14 09:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:04.627812785 +0000 UTC m=+1417.600094838" watchObservedRunningTime="2026-03-14 09:21:04.633337481 +0000 UTC m=+1417.605619554" Mar 14 09:21:04 crc kubenswrapper[4869]: I0314 09:21:04.672853 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c3ab-account-create-update-bscnd"] Mar 14 09:21:05 crc 
kubenswrapper[4869]: I0314 09:21:05.516068 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-74555fbb85-j9lkj" Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.638698 4869 generic.go:334] "Generic (PLEG): container finished" podID="8bdc2944-fc75-4309-a83f-3a3087099231" containerID="964c988c7c0652c1bf202bcae8a36d8f0c7057e28276e7f14b75ff217e7b1e02" exitCode=0 Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.638779 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2437-account-create-update-qbr82" event={"ID":"8bdc2944-fc75-4309-a83f-3a3087099231","Type":"ContainerDied","Data":"964c988c7c0652c1bf202bcae8a36d8f0c7057e28276e7f14b75ff217e7b1e02"} Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.651292 4869 generic.go:334] "Generic (PLEG): container finished" podID="ab7abd39-848f-41f5-9064-6219922e9684" containerID="08c047c85b1a4aa9ad4956925b6865625b3aa5872ab709dec24957fd67464a75" exitCode=0 Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.651595 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4s89j" event={"ID":"ab7abd39-848f-41f5-9064-6219922e9684","Type":"ContainerDied","Data":"08c047c85b1a4aa9ad4956925b6865625b3aa5872ab709dec24957fd67464a75"} Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.696890 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad503a03-3257-45a6-b1c0-d83794238d40" containerID="da2130f36d90e55d25585cb8b65fa57e5560e52ad7c3025f41736a9e84d96cba" exitCode=0 Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.696952 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad503a03-3257-45a6-b1c0-d83794238d40" containerID="c3fc84d0c02d7cd743d56c7f0309bdbcf76b5fafc610ceb51c06b93a7d956cf5" exitCode=2 Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.696965 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad503a03-3257-45a6-b1c0-d83794238d40" 
containerID="9923aa22b54d458de5c94fb65fafeead5d7e83be2f25052935877737c5b05974" exitCode=0 Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.697044 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerDied","Data":"da2130f36d90e55d25585cb8b65fa57e5560e52ad7c3025f41736a9e84d96cba"} Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.697088 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerDied","Data":"c3fc84d0c02d7cd743d56c7f0309bdbcf76b5fafc610ceb51c06b93a7d956cf5"} Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.697104 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerDied","Data":"9923aa22b54d458de5c94fb65fafeead5d7e83be2f25052935877737c5b05974"} Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.699816 4869 generic.go:334] "Generic (PLEG): container finished" podID="47a951e5-a6d1-4a1c-88ba-ed578c547d55" containerID="a4e94d05ebee941a175ce63cc676295b49ac9994b26437e9a468730c00253bfc" exitCode=0 Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.700107 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ld855" event={"ID":"47a951e5-a6d1-4a1c-88ba-ed578c547d55","Type":"ContainerDied","Data":"a4e94d05ebee941a175ce63cc676295b49ac9994b26437e9a468730c00253bfc"} Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.705682 4869 generic.go:334] "Generic (PLEG): container finished" podID="0263a6bb-e3ac-4eff-9021-c82a555ae52b" containerID="3933c0b08061ccd9547cc68e10e1e7c2fd62007fb8f3095fdfb6ea0ef8673d0d" exitCode=0 Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.713550 4869 generic.go:334] "Generic (PLEG): container finished" podID="5a73e307-e4ba-4102-b4d6-33897be89646" 
containerID="00e8fee75352381b15e48529901c7785caf83fc8989967a1c4adde529fd89fbc" exitCode=0 Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.722957 4869 generic.go:334] "Generic (PLEG): container finished" podID="0c679d2d-1e39-47a5-b4cf-dba3430a25d9" containerID="f71f10f72033c3aa58c92eb1143bcbca8cde4ed0d452b4a5e2a6dece3554d724" exitCode=0 Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.755964 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x7vlw" event={"ID":"0263a6bb-e3ac-4eff-9021-c82a555ae52b","Type":"ContainerDied","Data":"3933c0b08061ccd9547cc68e10e1e7c2fd62007fb8f3095fdfb6ea0ef8673d0d"} Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.756016 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" event={"ID":"5a73e307-e4ba-4102-b4d6-33897be89646","Type":"ContainerDied","Data":"00e8fee75352381b15e48529901c7785caf83fc8989967a1c4adde529fd89fbc"} Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.756030 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" event={"ID":"5a73e307-e4ba-4102-b4d6-33897be89646","Type":"ContainerStarted","Data":"a9d79759761ae75ec5918c2edeec0a64bb4dde8fb6222c7f3d1a84fcfb419028"} Mar 14 09:21:05 crc kubenswrapper[4869]: I0314 09:21:05.756039 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-638c-account-create-update-c76x8" event={"ID":"0c679d2d-1e39-47a5-b4cf-dba3430a25d9","Type":"ContainerDied","Data":"f71f10f72033c3aa58c92eb1143bcbca8cde4ed0d452b4a5e2a6dece3554d724"} Mar 14 09:21:06 crc kubenswrapper[4869]: I0314 09:21:06.739316 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad503a03-3257-45a6-b1c0-d83794238d40" containerID="0a6937cb71ff7f7b36f7de491f4a4976cc497f5b106efa59ed8c100a934513e9" exitCode=0 Mar 14 09:21:06 crc kubenswrapper[4869]: I0314 09:21:06.739833 4869 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerDied","Data":"0a6937cb71ff7f7b36f7de491f4a4976cc497f5b106efa59ed8c100a934513e9"} Mar 14 09:21:06 crc kubenswrapper[4869]: I0314 09:21:06.880530 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.206656 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.257005 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-combined-ca-bundle\") pod \"ad503a03-3257-45a6-b1c0-d83794238d40\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.257065 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-scripts\") pod \"ad503a03-3257-45a6-b1c0-d83794238d40\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.257118 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-log-httpd\") pod \"ad503a03-3257-45a6-b1c0-d83794238d40\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.257144 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-run-httpd\") pod \"ad503a03-3257-45a6-b1c0-d83794238d40\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.257183 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-ceilometer-tls-certs\") pod \"ad503a03-3257-45a6-b1c0-d83794238d40\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.257246 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-config-data\") pod \"ad503a03-3257-45a6-b1c0-d83794238d40\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.257391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxk2s\" (UniqueName: \"kubernetes.io/projected/ad503a03-3257-45a6-b1c0-d83794238d40-kube-api-access-xxk2s\") pod \"ad503a03-3257-45a6-b1c0-d83794238d40\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.257421 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-sg-core-conf-yaml\") pod \"ad503a03-3257-45a6-b1c0-d83794238d40\" (UID: \"ad503a03-3257-45a6-b1c0-d83794238d40\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.260345 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ad503a03-3257-45a6-b1c0-d83794238d40" (UID: "ad503a03-3257-45a6-b1c0-d83794238d40"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.266447 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ad503a03-3257-45a6-b1c0-d83794238d40" (UID: "ad503a03-3257-45a6-b1c0-d83794238d40"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.292667 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad503a03-3257-45a6-b1c0-d83794238d40-kube-api-access-xxk2s" (OuterVolumeSpecName: "kube-api-access-xxk2s") pod "ad503a03-3257-45a6-b1c0-d83794238d40" (UID: "ad503a03-3257-45a6-b1c0-d83794238d40"). InnerVolumeSpecName "kube-api-access-xxk2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.300830 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-scripts" (OuterVolumeSpecName: "scripts") pod "ad503a03-3257-45a6-b1c0-d83794238d40" (UID: "ad503a03-3257-45a6-b1c0-d83794238d40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.333105 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ad503a03-3257-45a6-b1c0-d83794238d40" (UID: "ad503a03-3257-45a6-b1c0-d83794238d40"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.359657 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxk2s\" (UniqueName: \"kubernetes.io/projected/ad503a03-3257-45a6-b1c0-d83794238d40-kube-api-access-xxk2s\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.395815 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.395846 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.395858 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.395869 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad503a03-3257-45a6-b1c0-d83794238d40-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.404125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad503a03-3257-45a6-b1c0-d83794238d40" (UID: "ad503a03-3257-45a6-b1c0-d83794238d40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.444419 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.488591 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.498849 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.499452 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ad503a03-3257-45a6-b1c0-d83794238d40" (UID: "ad503a03-3257-45a6-b1c0-d83794238d40"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.501204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-config-data" (OuterVolumeSpecName: "config-data") pod "ad503a03-3257-45a6-b1c0-d83794238d40" (UID: "ad503a03-3257-45a6-b1c0-d83794238d40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.503480 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.512060 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.523414 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.544161 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600364 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmfrc\" (UniqueName: \"kubernetes.io/projected/47a951e5-a6d1-4a1c-88ba-ed578c547d55-kube-api-access-jmfrc\") pod \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\" (UID: \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600487 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pphlc\" (UniqueName: \"kubernetes.io/projected/ab7abd39-848f-41f5-9064-6219922e9684-kube-api-access-pphlc\") pod \"ab7abd39-848f-41f5-9064-6219922e9684\" (UID: \"ab7abd39-848f-41f5-9064-6219922e9684\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600566 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a73e307-e4ba-4102-b4d6-33897be89646-operator-scripts\") pod \"5a73e307-e4ba-4102-b4d6-33897be89646\" (UID: \"5a73e307-e4ba-4102-b4d6-33897be89646\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600609 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc2944-fc75-4309-a83f-3a3087099231-operator-scripts\") pod \"8bdc2944-fc75-4309-a83f-3a3087099231\" (UID: \"8bdc2944-fc75-4309-a83f-3a3087099231\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600632 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-operator-scripts\") pod 
\"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\" (UID: \"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600651 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7abd39-848f-41f5-9064-6219922e9684-operator-scripts\") pod \"ab7abd39-848f-41f5-9064-6219922e9684\" (UID: \"ab7abd39-848f-41f5-9064-6219922e9684\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600686 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sqgk\" (UniqueName: \"kubernetes.io/projected/5a73e307-e4ba-4102-b4d6-33897be89646-kube-api-access-8sqgk\") pod \"5a73e307-e4ba-4102-b4d6-33897be89646\" (UID: \"5a73e307-e4ba-4102-b4d6-33897be89646\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600718 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a951e5-a6d1-4a1c-88ba-ed578c547d55-operator-scripts\") pod \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\" (UID: \"47a951e5-a6d1-4a1c-88ba-ed578c547d55\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600736 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9nmk\" (UniqueName: \"kubernetes.io/projected/8bdc2944-fc75-4309-a83f-3a3087099231-kube-api-access-s9nmk\") pod \"8bdc2944-fc75-4309-a83f-3a3087099231\" (UID: \"8bdc2944-fc75-4309-a83f-3a3087099231\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.600779 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tjvz\" (UniqueName: \"kubernetes.io/projected/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-kube-api-access-8tjvz\") pod \"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\" (UID: \"0c679d2d-1e39-47a5-b4cf-dba3430a25d9\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.601190 4869 
reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.601210 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad503a03-3257-45a6-b1c0-d83794238d40-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.601387 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bdc2944-fc75-4309-a83f-3a3087099231-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8bdc2944-fc75-4309-a83f-3a3087099231" (UID: "8bdc2944-fc75-4309-a83f-3a3087099231"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.601446 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a951e5-a6d1-4a1c-88ba-ed578c547d55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47a951e5-a6d1-4a1c-88ba-ed578c547d55" (UID: "47a951e5-a6d1-4a1c-88ba-ed578c547d55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.601664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a73e307-e4ba-4102-b4d6-33897be89646-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a73e307-e4ba-4102-b4d6-33897be89646" (UID: "5a73e307-e4ba-4102-b4d6-33897be89646"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.601850 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c679d2d-1e39-47a5-b4cf-dba3430a25d9" (UID: "0c679d2d-1e39-47a5-b4cf-dba3430a25d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.601996 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7abd39-848f-41f5-9064-6219922e9684-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab7abd39-848f-41f5-9064-6219922e9684" (UID: "ab7abd39-848f-41f5-9064-6219922e9684"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.604007 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7abd39-848f-41f5-9064-6219922e9684-kube-api-access-pphlc" (OuterVolumeSpecName: "kube-api-access-pphlc") pod "ab7abd39-848f-41f5-9064-6219922e9684" (UID: "ab7abd39-848f-41f5-9064-6219922e9684"). InnerVolumeSpecName "kube-api-access-pphlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.607005 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bdc2944-fc75-4309-a83f-3a3087099231-kube-api-access-s9nmk" (OuterVolumeSpecName: "kube-api-access-s9nmk") pod "8bdc2944-fc75-4309-a83f-3a3087099231" (UID: "8bdc2944-fc75-4309-a83f-3a3087099231"). InnerVolumeSpecName "kube-api-access-s9nmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.607175 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a73e307-e4ba-4102-b4d6-33897be89646-kube-api-access-8sqgk" (OuterVolumeSpecName: "kube-api-access-8sqgk") pod "5a73e307-e4ba-4102-b4d6-33897be89646" (UID: "5a73e307-e4ba-4102-b4d6-33897be89646"). InnerVolumeSpecName "kube-api-access-8sqgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.607321 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-kube-api-access-8tjvz" (OuterVolumeSpecName: "kube-api-access-8tjvz") pod "0c679d2d-1e39-47a5-b4cf-dba3430a25d9" (UID: "0c679d2d-1e39-47a5-b4cf-dba3430a25d9"). InnerVolumeSpecName "kube-api-access-8tjvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.609652 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a951e5-a6d1-4a1c-88ba-ed578c547d55-kube-api-access-jmfrc" (OuterVolumeSpecName: "kube-api-access-jmfrc") pod "47a951e5-a6d1-4a1c-88ba-ed578c547d55" (UID: "47a951e5-a6d1-4a1c-88ba-ed578c547d55"). InnerVolumeSpecName "kube-api-access-jmfrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.702490 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28p5j\" (UniqueName: \"kubernetes.io/projected/0263a6bb-e3ac-4eff-9021-c82a555ae52b-kube-api-access-28p5j\") pod \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\" (UID: \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.702636 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263a6bb-e3ac-4eff-9021-c82a555ae52b-operator-scripts\") pod \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\" (UID: \"0263a6bb-e3ac-4eff-9021-c82a555ae52b\") " Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703051 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tjvz\" (UniqueName: \"kubernetes.io/projected/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-kube-api-access-8tjvz\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703068 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmfrc\" (UniqueName: \"kubernetes.io/projected/47a951e5-a6d1-4a1c-88ba-ed578c547d55-kube-api-access-jmfrc\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703079 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pphlc\" (UniqueName: \"kubernetes.io/projected/ab7abd39-848f-41f5-9064-6219922e9684-kube-api-access-pphlc\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703089 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a73e307-e4ba-4102-b4d6-33897be89646-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703098 4869 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bdc2944-fc75-4309-a83f-3a3087099231-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703107 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c679d2d-1e39-47a5-b4cf-dba3430a25d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703115 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7abd39-848f-41f5-9064-6219922e9684-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703124 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sqgk\" (UniqueName: \"kubernetes.io/projected/5a73e307-e4ba-4102-b4d6-33897be89646-kube-api-access-8sqgk\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703132 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a951e5-a6d1-4a1c-88ba-ed578c547d55-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703140 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9nmk\" (UniqueName: \"kubernetes.io/projected/8bdc2944-fc75-4309-a83f-3a3087099231-kube-api-access-s9nmk\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.703448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0263a6bb-e3ac-4eff-9021-c82a555ae52b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0263a6bb-e3ac-4eff-9021-c82a555ae52b" (UID: "0263a6bb-e3ac-4eff-9021-c82a555ae52b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.708742 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0263a6bb-e3ac-4eff-9021-c82a555ae52b-kube-api-access-28p5j" (OuterVolumeSpecName: "kube-api-access-28p5j") pod "0263a6bb-e3ac-4eff-9021-c82a555ae52b" (UID: "0263a6bb-e3ac-4eff-9021-c82a555ae52b"). InnerVolumeSpecName "kube-api-access-28p5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.778704 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ld855" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.778739 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ld855" event={"ID":"47a951e5-a6d1-4a1c-88ba-ed578c547d55","Type":"ContainerDied","Data":"727117c7df57c81612b0b25d9044216cede8751730ee4a482c27a91250024cb0"} Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.779995 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="727117c7df57c81612b0b25d9044216cede8751730ee4a482c27a91250024cb0" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.782377 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x7vlw" event={"ID":"0263a6bb-e3ac-4eff-9021-c82a555ae52b","Type":"ContainerDied","Data":"93f05cde351fce14f854f4912ad55fea30696dd15f9ec1af8fd39ee9b4ee1bf4"} Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.782410 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f05cde351fce14f854f4912ad55fea30696dd15f9ec1af8fd39ee9b4ee1bf4" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.782460 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-x7vlw" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.791358 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" event={"ID":"5a73e307-e4ba-4102-b4d6-33897be89646","Type":"ContainerDied","Data":"a9d79759761ae75ec5918c2edeec0a64bb4dde8fb6222c7f3d1a84fcfb419028"} Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.791397 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9d79759761ae75ec5918c2edeec0a64bb4dde8fb6222c7f3d1a84fcfb419028" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.791457 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c3ab-account-create-update-bscnd" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.797629 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-638c-account-create-update-c76x8" event={"ID":"0c679d2d-1e39-47a5-b4cf-dba3430a25d9","Type":"ContainerDied","Data":"affd815c228edc673ecbd0214d852cc45516465c8cdb94a8794b94dd3464eec7"} Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.797996 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="affd815c228edc673ecbd0214d852cc45516465c8cdb94a8794b94dd3464eec7" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.797651 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-638c-account-create-update-c76x8" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.800851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2437-account-create-update-qbr82" event={"ID":"8bdc2944-fc75-4309-a83f-3a3087099231","Type":"ContainerDied","Data":"c320b403f0dae34163dec620e01229cd5d0c2ce527f4f86096cab25682240af0"} Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.801004 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c320b403f0dae34163dec620e01229cd5d0c2ce527f4f86096cab25682240af0" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.801157 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2437-account-create-update-qbr82" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.804441 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28p5j\" (UniqueName: \"kubernetes.io/projected/0263a6bb-e3ac-4eff-9021-c82a555ae52b-kube-api-access-28p5j\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.804467 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263a6bb-e3ac-4eff-9021-c82a555ae52b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.807369 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4s89j" event={"ID":"ab7abd39-848f-41f5-9064-6219922e9684","Type":"ContainerDied","Data":"ad842ef6395cad6af1eed50080d8078211bf4ebdb18f0f87bece3070f499130d"} Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.807403 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad842ef6395cad6af1eed50080d8078211bf4ebdb18f0f87bece3070f499130d" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.807456 4869 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4s89j" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.814491 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad503a03-3257-45a6-b1c0-d83794238d40","Type":"ContainerDied","Data":"92d1a9efac2b3954ee9f3afb6f227e14c97e4d7c34e21af19c9a5899bd8817d8"} Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.814552 4869 scope.go:117] "RemoveContainer" containerID="da2130f36d90e55d25585cb8b65fa57e5560e52ad7c3025f41736a9e84d96cba" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.814552 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.868397 4869 scope.go:117] "RemoveContainer" containerID="c3fc84d0c02d7cd743d56c7f0309bdbcf76b5fafc610ceb51c06b93a7d956cf5" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.920295 4869 scope.go:117] "RemoveContainer" containerID="9923aa22b54d458de5c94fb65fafeead5d7e83be2f25052935877737c5b05974" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.931581 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.931840 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f1c5363e-e811-4795-9b80-7f4be678b705" containerName="glance-log" containerID="cri-o://a11bb73fa16a10f061f68c1e9077ddf948b7cf8de5fc0a18e9d6e9f6f7331ecc" gracePeriod=30 Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.931994 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f1c5363e-e811-4795-9b80-7f4be678b705" containerName="glance-httpd" containerID="cri-o://249f30a57af1b6c0e59d4fc2ba9b67a2dde6dc1cc879dc1600cfef5de142278f" gracePeriod=30 Mar 
14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.946761 4869 scope.go:117] "RemoveContainer" containerID="0a6937cb71ff7f7b36f7de491f4a4976cc497f5b106efa59ed8c100a934513e9" Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.958135 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:07 crc kubenswrapper[4869]: I0314 09:21:07.979358 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001000 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 09:21:08.001391 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdc2944-fc75-4309-a83f-3a3087099231" containerName="mariadb-account-create-update" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001412 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdc2944-fc75-4309-a83f-3a3087099231" containerName="mariadb-account-create-update" Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 09:21:08.001428 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c679d2d-1e39-47a5-b4cf-dba3430a25d9" containerName="mariadb-account-create-update" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001435 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c679d2d-1e39-47a5-b4cf-dba3430a25d9" containerName="mariadb-account-create-update" Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 09:21:08.001450 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7abd39-848f-41f5-9064-6219922e9684" containerName="mariadb-database-create" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001455 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7abd39-848f-41f5-9064-6219922e9684" containerName="mariadb-database-create" Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 09:21:08.001467 4869 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5a73e307-e4ba-4102-b4d6-33897be89646" containerName="mariadb-account-create-update" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001472 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a73e307-e4ba-4102-b4d6-33897be89646" containerName="mariadb-account-create-update" Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 09:21:08.001482 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0263a6bb-e3ac-4eff-9021-c82a555ae52b" containerName="mariadb-database-create" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001488 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0263a6bb-e3ac-4eff-9021-c82a555ae52b" containerName="mariadb-database-create" Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 09:21:08.001501 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="ceilometer-central-agent" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001528 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="ceilometer-central-agent" Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 09:21:08.001538 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="ceilometer-notification-agent" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001544 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="ceilometer-notification-agent" Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 09:21:08.001553 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="proxy-httpd" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001558 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="proxy-httpd" Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 
09:21:08.001577 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a951e5-a6d1-4a1c-88ba-ed578c547d55" containerName="mariadb-database-create" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001583 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a951e5-a6d1-4a1c-88ba-ed578c547d55" containerName="mariadb-database-create" Mar 14 09:21:08 crc kubenswrapper[4869]: E0314 09:21:08.001593 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="sg-core" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001600 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="sg-core" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001784 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0263a6bb-e3ac-4eff-9021-c82a555ae52b" containerName="mariadb-database-create" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001795 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7abd39-848f-41f5-9064-6219922e9684" containerName="mariadb-database-create" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001806 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bdc2944-fc75-4309-a83f-3a3087099231" containerName="mariadb-account-create-update" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001817 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="proxy-httpd" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001827 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a951e5-a6d1-4a1c-88ba-ed578c547d55" containerName="mariadb-database-create" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001839 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a73e307-e4ba-4102-b4d6-33897be89646" 
containerName="mariadb-account-create-update" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001851 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="ceilometer-central-agent" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001860 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="sg-core" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001868 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c679d2d-1e39-47a5-b4cf-dba3430a25d9" containerName="mariadb-account-create-update" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.001880 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" containerName="ceilometer-notification-agent" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.003802 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.011302 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.011724 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.012031 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.024690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.126938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-log-httpd\") pod \"ceilometer-0\" (UID: 
\"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.126976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.126997 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-run-httpd\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.127093 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.127111 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-scripts\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.127154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 
09:21:08.127198 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-config-data\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.127225 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44llb\" (UniqueName: \"kubernetes.io/projected/2d5af6be-c5e2-4983-8e41-f7046e785500-kube-api-access-44llb\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.228649 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.228692 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-scripts\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.228741 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.228791 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-config-data\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.228820 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44llb\" (UniqueName: \"kubernetes.io/projected/2d5af6be-c5e2-4983-8e41-f7046e785500-kube-api-access-44llb\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.228844 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-log-httpd\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.228860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.228875 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-run-httpd\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.229396 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-run-httpd\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 
09:21:08.230607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-log-httpd\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.234279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-scripts\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.235133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.237141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.237673 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.238232 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-config-data\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " 
pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.251158 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44llb\" (UniqueName: \"kubernetes.io/projected/2d5af6be-c5e2-4983-8e41-f7046e785500-kube-api-access-44llb\") pod \"ceilometer-0\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.335054 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.701152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.850517 4869 generic.go:334] "Generic (PLEG): container finished" podID="f1c5363e-e811-4795-9b80-7f4be678b705" containerID="a11bb73fa16a10f061f68c1e9077ddf948b7cf8de5fc0a18e9d6e9f6f7331ecc" exitCode=143 Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.850560 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1c5363e-e811-4795-9b80-7f4be678b705","Type":"ContainerDied","Data":"a11bb73fa16a10f061f68c1e9077ddf948b7cf8de5fc0a18e9d6e9f6f7331ecc"} Mar 14 09:21:08 crc kubenswrapper[4869]: I0314 09:21:08.950570 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.398708 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.399199 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerName="glance-log" containerID="cri-o://401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da" gracePeriod=30 Mar 14 09:21:09 
crc kubenswrapper[4869]: I0314 09:21:09.401062 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerName="glance-httpd" containerID="cri-o://4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca" gracePeriod=30 Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.605092 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.605422 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.605466 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.606181 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"abbbfeab2461a01be6db6822d3b45b765d683a9778e55e7dd9c19e2a95f80e1d"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.606238 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" 
containerName="machine-config-daemon" containerID="cri-o://abbbfeab2461a01be6db6822d3b45b765d683a9778e55e7dd9c19e2a95f80e1d" gracePeriod=600 Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.719308 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad503a03-3257-45a6-b1c0-d83794238d40" path="/var/lib/kubelet/pods/ad503a03-3257-45a6-b1c0-d83794238d40/volumes" Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.921349 4869 generic.go:334] "Generic (PLEG): container finished" podID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerID="401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da" exitCode=143 Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.921788 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ef0fc1-f98b-4e00-8066-9084f1631bff","Type":"ContainerDied","Data":"401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da"} Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.949971 4869 generic.go:334] "Generic (PLEG): container finished" podID="f1c5363e-e811-4795-9b80-7f4be678b705" containerID="249f30a57af1b6c0e59d4fc2ba9b67a2dde6dc1cc879dc1600cfef5de142278f" exitCode=0 Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.950053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1c5363e-e811-4795-9b80-7f4be678b705","Type":"ContainerDied","Data":"249f30a57af1b6c0e59d4fc2ba9b67a2dde6dc1cc879dc1600cfef5de142278f"} Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.997322 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerStarted","Data":"71d4085a42091e60885b37e606e94cb0986503fe546dee42392fc068ac02956f"} Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.997378 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerStarted","Data":"01fdc2e4d53d3ea86f6be9c6b01ed3f82a43da88c1065602f934ac814ab35910"} Mar 14 09:21:09 crc kubenswrapper[4869]: I0314 09:21:09.997390 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerStarted","Data":"6db2b74648aab7d9a0c304c1f24b2a61f155a6fe33f3789d95555ba12a6fb437"} Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.005770 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="abbbfeab2461a01be6db6822d3b45b765d683a9778e55e7dd9c19e2a95f80e1d" exitCode=0 Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.005827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"abbbfeab2461a01be6db6822d3b45b765d683a9778e55e7dd9c19e2a95f80e1d"} Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.005862 4869 scope.go:117] "RemoveContainer" containerID="56010979bbae19d804da289e0aa16d793e02c78a300551a90489925126f6f41f" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.110337 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.190337 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-public-tls-certs\") pod \"f1c5363e-e811-4795-9b80-7f4be678b705\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.190405 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-logs\") pod \"f1c5363e-e811-4795-9b80-7f4be678b705\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.190470 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-combined-ca-bundle\") pod \"f1c5363e-e811-4795-9b80-7f4be678b705\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.190495 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-config-data\") pod \"f1c5363e-e811-4795-9b80-7f4be678b705\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.190682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq7fz\" (UniqueName: \"kubernetes.io/projected/f1c5363e-e811-4795-9b80-7f4be678b705-kube-api-access-hq7fz\") pod \"f1c5363e-e811-4795-9b80-7f4be678b705\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.190711 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-httpd-run\") pod \"f1c5363e-e811-4795-9b80-7f4be678b705\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.190772 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"f1c5363e-e811-4795-9b80-7f4be678b705\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.190820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-scripts\") pod \"f1c5363e-e811-4795-9b80-7f4be678b705\" (UID: \"f1c5363e-e811-4795-9b80-7f4be678b705\") " Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.192590 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f1c5363e-e811-4795-9b80-7f4be678b705" (UID: "f1c5363e-e811-4795-9b80-7f4be678b705"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.193616 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-logs" (OuterVolumeSpecName: "logs") pod "f1c5363e-e811-4795-9b80-7f4be678b705" (UID: "f1c5363e-e811-4795-9b80-7f4be678b705"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.199451 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-scripts" (OuterVolumeSpecName: "scripts") pod "f1c5363e-e811-4795-9b80-7f4be678b705" (UID: "f1c5363e-e811-4795-9b80-7f4be678b705"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.199912 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c5363e-e811-4795-9b80-7f4be678b705-kube-api-access-hq7fz" (OuterVolumeSpecName: "kube-api-access-hq7fz") pod "f1c5363e-e811-4795-9b80-7f4be678b705" (UID: "f1c5363e-e811-4795-9b80-7f4be678b705"). InnerVolumeSpecName "kube-api-access-hq7fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.207947 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "f1c5363e-e811-4795-9b80-7f4be678b705" (UID: "f1c5363e-e811-4795-9b80-7f4be678b705"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.230756 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1c5363e-e811-4795-9b80-7f4be678b705" (UID: "f1c5363e-e811-4795-9b80-7f4be678b705"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.242088 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.282357 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.291357 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f1c5363e-e811-4795-9b80-7f4be678b705" (UID: "f1c5363e-e811-4795-9b80-7f4be678b705"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.296065 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq7fz\" (UniqueName: \"kubernetes.io/projected/f1c5363e-e811-4795-9b80-7f4be678b705-kube-api-access-hq7fz\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.296095 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.296115 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.296126 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.296137 4869 reconciler_common.go:293] 
"Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.296144 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1c5363e-e811-4795-9b80-7f4be678b705-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.296152 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.318645 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-config-data" (OuterVolumeSpecName: "config-data") pod "f1c5363e-e811-4795-9b80-7f4be678b705" (UID: "f1c5363e-e811-4795-9b80-7f4be678b705"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.351193 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.398152 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.398187 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1c5363e-e811-4795-9b80-7f4be678b705-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.790922 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75cd657fd5-hrb28" Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.873044 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.895019 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-786bc4c684-kzltd"] Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.895282 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-786bc4c684-kzltd" podUID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerName="neutron-api" containerID="cri-o://6e7fb6d3815dace322d2364536be9e45dc321e6f3f0e4bca136f6c8a344cbcb1" gracePeriod=30 Mar 14 09:21:10 crc kubenswrapper[4869]: I0314 09:21:10.895439 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-786bc4c684-kzltd" podUID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerName="neutron-httpd" containerID="cri-o://6dcc1369b777b98fbbaf29434849b469f22f1300adbffe8d8da52febf4d4592a" gracePeriod=30 Mar 14 09:21:11 
crc kubenswrapper[4869]: I0314 09:21:11.026320 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311"} Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.040531 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1c5363e-e811-4795-9b80-7f4be678b705","Type":"ContainerDied","Data":"1d46a021f6952a9aa5e81920d76059f069961778a26d5a3e9cd4cdfcac9ec8bb"} Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.040588 4869 scope.go:117] "RemoveContainer" containerID="249f30a57af1b6c0e59d4fc2ba9b67a2dde6dc1cc879dc1600cfef5de142278f" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.040756 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.057630 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerStarted","Data":"ec18e63af676c5446445a3964054f690284ebcb961e16a4a33cac090e88e668b"} Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.057699 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.106476 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.133643 4869 scope.go:117] "RemoveContainer" containerID="a11bb73fa16a10f061f68c1e9077ddf948b7cf8de5fc0a18e9d6e9f6f7331ecc" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.230201 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.257291 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.280734 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:21:11 crc kubenswrapper[4869]: E0314 09:21:11.281179 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c5363e-e811-4795-9b80-7f4be678b705" containerName="glance-log" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.281192 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c5363e-e811-4795-9b80-7f4be678b705" containerName="glance-log" Mar 14 09:21:11 crc kubenswrapper[4869]: E0314 09:21:11.281214 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c5363e-e811-4795-9b80-7f4be678b705" containerName="glance-httpd" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.281219 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c5363e-e811-4795-9b80-7f4be678b705" containerName="glance-httpd" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.281402 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c5363e-e811-4795-9b80-7f4be678b705" containerName="glance-log" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.281422 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c5363e-e811-4795-9b80-7f4be678b705" containerName="glance-httpd" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.297272 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.301082 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.301258 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.316925 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.317193 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.317309 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.317391 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-logs\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 
14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.317489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.317614 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.317739 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48jq\" (UniqueName: \"kubernetes.io/projected/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-kube-api-access-l48jq\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.317819 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.368566 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.420125 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.420163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.420192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.420211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-logs\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.420235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.420266 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.420334 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48jq\" (UniqueName: \"kubernetes.io/projected/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-kube-api-access-l48jq\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.420357 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.421753 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-logs\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.421772 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.422007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.429854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.433166 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-scripts\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.437150 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.438880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l48jq\" (UniqueName: \"kubernetes.io/projected/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-kube-api-access-l48jq\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.439213 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ece40c5-10b0-4c1e-8985-99ccf56b5cfb-config-data\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" 
Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.508192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb\") " pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.631998 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.706206 4869 scope.go:117] "RemoveContainer" containerID="9d73f7712585932f964f047e324821c397ba92b8108032a413dc8cbc1ba74f9f" Mar 14 09:21:11 crc kubenswrapper[4869]: I0314 09:21:11.795700 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c5363e-e811-4795-9b80-7f4be678b705" path="/var/lib/kubelet/pods/f1c5363e-e811-4795-9b80-7f4be678b705/volumes" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.000531 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.055454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-combined-ca-bundle\") pod \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.055524 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncz6r\" (UniqueName: \"kubernetes.io/projected/c4ef0fc1-f98b-4e00-8066-9084f1631bff-kube-api-access-ncz6r\") pod \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.055550 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.055616 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-internal-tls-certs\") pod \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.055644 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-logs\") pod \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.055732 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-httpd-run\") pod \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.055793 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-scripts\") pod \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.055855 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-config-data\") pod \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\" (UID: \"c4ef0fc1-f98b-4e00-8066-9084f1631bff\") " Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.059728 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-logs" (OuterVolumeSpecName: "logs") pod "c4ef0fc1-f98b-4e00-8066-9084f1631bff" (UID: "c4ef0fc1-f98b-4e00-8066-9084f1631bff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.060281 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c4ef0fc1-f98b-4e00-8066-9084f1631bff" (UID: "c4ef0fc1-f98b-4e00-8066-9084f1631bff"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.084053 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ef0fc1-f98b-4e00-8066-9084f1631bff-kube-api-access-ncz6r" (OuterVolumeSpecName: "kube-api-access-ncz6r") pod "c4ef0fc1-f98b-4e00-8066-9084f1631bff" (UID: "c4ef0fc1-f98b-4e00-8066-9084f1631bff"). InnerVolumeSpecName "kube-api-access-ncz6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.087746 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "c4ef0fc1-f98b-4e00-8066-9084f1631bff" (UID: "c4ef0fc1-f98b-4e00-8066-9084f1631bff"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.090629 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-scripts" (OuterVolumeSpecName: "scripts") pod "c4ef0fc1-f98b-4e00-8066-9084f1631bff" (UID: "c4ef0fc1-f98b-4e00-8066-9084f1631bff"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.100632 4869 generic.go:334] "Generic (PLEG): container finished" podID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerID="4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca" exitCode=0 Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.100715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ef0fc1-f98b-4e00-8066-9084f1631bff","Type":"ContainerDied","Data":"4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca"} Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.100741 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4ef0fc1-f98b-4e00-8066-9084f1631bff","Type":"ContainerDied","Data":"1788681cda8f1ccc04000f48a6e14193f8103712628ba1bc048df5ade17ce0b4"} Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.100782 4869 scope.go:117] "RemoveContainer" containerID="4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.100986 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.144756 4869 generic.go:334] "Generic (PLEG): container finished" podID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerID="6dcc1369b777b98fbbaf29434849b469f22f1300adbffe8d8da52febf4d4592a" exitCode=0 Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.145173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-786bc4c684-kzltd" event={"ID":"443191c7-2ebd-4ac2-a36e-d6c36958dba6","Type":"ContainerDied","Data":"6dcc1369b777b98fbbaf29434849b469f22f1300adbffe8d8da52febf4d4592a"} Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.155267 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4ef0fc1-f98b-4e00-8066-9084f1631bff" (UID: "c4ef0fc1-f98b-4e00-8066-9084f1631bff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.160655 4869 scope.go:117] "RemoveContainer" containerID="401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.161053 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.161079 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.161092 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.161100 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncz6r\" (UniqueName: \"kubernetes.io/projected/c4ef0fc1-f98b-4e00-8066-9084f1631bff-kube-api-access-ncz6r\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.161121 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.161131 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ef0fc1-f98b-4e00-8066-9084f1631bff-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.175551 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"35c6d1fd-be8f-4390-9199-bf573760717b","Type":"ContainerStarted","Data":"8528c4053b11ba42e86bda04d957113d4252839ee90a451cad52dc588cb46ae1"} Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.182151 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-config-data" (OuterVolumeSpecName: "config-data") pod "c4ef0fc1-f98b-4e00-8066-9084f1631bff" (UID: "c4ef0fc1-f98b-4e00-8066-9084f1631bff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.209162 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.835322872 podStartE2EDuration="35.209144257s" podCreationTimestamp="2026-03-14 09:20:37 +0000 UTC" firstStartedPulling="2026-03-14 09:20:38.388138403 +0000 UTC m=+1391.360420456" lastFinishedPulling="2026-03-14 09:21:10.761959788 +0000 UTC m=+1423.734241841" observedRunningTime="2026-03-14 09:21:12.200786631 +0000 UTC m=+1425.173068684" watchObservedRunningTime="2026-03-14 09:21:12.209144257 +0000 UTC m=+1425.181426310" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.219801 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c4ef0fc1-f98b-4e00-8066-9084f1631bff" (UID: "c4ef0fc1-f98b-4e00-8066-9084f1631bff"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.222184 4869 scope.go:117] "RemoveContainer" containerID="4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca" Mar 14 09:21:12 crc kubenswrapper[4869]: E0314 09:21:12.227201 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca\": container with ID starting with 4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca not found: ID does not exist" containerID="4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.227255 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca"} err="failed to get container status \"4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca\": rpc error: code = NotFound desc = could not find container \"4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca\": container with ID starting with 4bddd13700c4d9bc0e62ce4fb060c36e85905d7fddb2921be7bb4beb1a788bca not found: ID does not exist" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.227284 4869 scope.go:117] "RemoveContainer" containerID="401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da" Mar 14 09:21:12 crc kubenswrapper[4869]: E0314 09:21:12.227727 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da\": container with ID starting with 401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da not found: ID does not exist" containerID="401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.227755 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da"} err="failed to get container status \"401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da\": rpc error: code = NotFound desc = could not find container \"401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da\": container with ID starting with 401df93732272162477819a7ee7865a18e571520a9def13b3d933381eaf6e0da not found: ID does not exist" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.234814 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.263249 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.263285 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.263297 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4ef0fc1-f98b-4e00-8066-9084f1631bff-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.412703 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 14 09:21:12 crc kubenswrapper[4869]: W0314 09:21:12.421637 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ece40c5_10b0_4c1e_8985_99ccf56b5cfb.slice/crio-d1f1fb36b1319088a5c2376c8c93f39c3db2387fd24f3ae880167019f186349d WatchSource:0}: Error finding container d1f1fb36b1319088a5c2376c8c93f39c3db2387fd24f3ae880167019f186349d: Status 404 returned error can't find the container with id d1f1fb36b1319088a5c2376c8c93f39c3db2387fd24f3ae880167019f186349d Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.447697 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.459170 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.486436 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:21:12 crc kubenswrapper[4869]: E0314 09:21:12.486924 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerName="glance-httpd" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.486946 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerName="glance-httpd" Mar 14 09:21:12 crc kubenswrapper[4869]: E0314 09:21:12.486984 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerName="glance-log" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.486993 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerName="glance-log" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.487200 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerName="glance-httpd" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.487224 4869 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" containerName="glance-log" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.488252 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.492595 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.492793 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.495727 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.575122 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5facda51-8081-455a-93ee-ca02ca6e6e55-logs\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.575177 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdwkl\" (UniqueName: \"kubernetes.io/projected/5facda51-8081-455a-93ee-ca02ca6e6e55-kube-api-access-zdwkl\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.575226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " 
pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.575248 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.575268 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.575311 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.575359 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.575392 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5facda51-8081-455a-93ee-ca02ca6e6e55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " 
pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.676937 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5facda51-8081-455a-93ee-ca02ca6e6e55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.677012 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5facda51-8081-455a-93ee-ca02ca6e6e55-logs\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.677044 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdwkl\" (UniqueName: \"kubernetes.io/projected/5facda51-8081-455a-93ee-ca02ca6e6e55-kube-api-access-zdwkl\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.677084 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.677126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 
14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.677149 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.677194 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.677241 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.677402 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5facda51-8081-455a-93ee-ca02ca6e6e55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.678138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5facda51-8081-455a-93ee-ca02ca6e6e55-logs\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.678243 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.682074 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.682410 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.682899 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.683793 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5facda51-8081-455a-93ee-ca02ca6e6e55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.702224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdwkl\" (UniqueName: 
\"kubernetes.io/projected/5facda51-8081-455a-93ee-ca02ca6e6e55-kube-api-access-zdwkl\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.720231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5facda51-8081-455a-93ee-ca02ca6e6e55\") " pod="openstack/glance-default-internal-api-0" Mar 14 09:21:12 crc kubenswrapper[4869]: I0314 09:21:12.818108 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.194464 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"4e13a4d5505bd614dfaa20ecad855cc61f8d04a8b69a6131aa7b405fd4e18565"} Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.197826 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb","Type":"ContainerStarted","Data":"d1f1fb36b1319088a5c2376c8c93f39c3db2387fd24f3ae880167019f186349d"} Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.241415 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerStarted","Data":"46d1daee0082d7f3393e51f4a17ecdccb2e127f12e78ce264791d52d3b305f1a"} Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.241501 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="ceilometer-central-agent" 
containerID="cri-o://01fdc2e4d53d3ea86f6be9c6b01ed3f82a43da88c1065602f934ac814ab35910" gracePeriod=30 Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.241596 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="sg-core" containerID="cri-o://ec18e63af676c5446445a3964054f690284ebcb961e16a4a33cac090e88e668b" gracePeriod=30 Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.241661 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="ceilometer-notification-agent" containerID="cri-o://71d4085a42091e60885b37e606e94cb0986503fe546dee42392fc068ac02956f" gracePeriod=30 Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.241709 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="proxy-httpd" containerID="cri-o://46d1daee0082d7f3393e51f4a17ecdccb2e127f12e78ce264791d52d3b305f1a" gracePeriod=30 Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.241535 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.277694 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.945291946 podStartE2EDuration="6.277669093s" podCreationTimestamp="2026-03-14 09:21:07 +0000 UTC" firstStartedPulling="2026-03-14 09:21:08.959907022 +0000 UTC m=+1421.932189075" lastFinishedPulling="2026-03-14 09:21:12.292284169 +0000 UTC m=+1425.264566222" observedRunningTime="2026-03-14 09:21:13.272523557 +0000 UTC m=+1426.244805630" watchObservedRunningTime="2026-03-14 09:21:13.277669093 +0000 UTC m=+1426.249951156" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.501494 4869 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ft85m"] Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.503425 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.505829 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.506151 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7hdhz" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.506297 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.531023 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ft85m"] Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.559224 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.617024 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25tcb\" (UniqueName: \"kubernetes.io/projected/fa348286-4ed9-4e11-8b48-6999c63429f6-kube-api-access-25tcb\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.617200 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-config-data\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 
09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.617235 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-scripts\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.617295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.704055 4869 scope.go:117] "RemoveContainer" containerID="aad3ebc17078f03f839b6596f0e0c9602b1b0d55731a40e54c6502807b95455b" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.718738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-config-data\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.718793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-scripts\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.718849 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.718890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25tcb\" (UniqueName: \"kubernetes.io/projected/fa348286-4ed9-4e11-8b48-6999c63429f6-kube-api-access-25tcb\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.727100 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-config-data\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.729213 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-scripts\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.736977 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.742197 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ef0fc1-f98b-4e00-8066-9084f1631bff" 
path="/var/lib/kubelet/pods/c4ef0fc1-f98b-4e00-8066-9084f1631bff/volumes" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.746781 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25tcb\" (UniqueName: \"kubernetes.io/projected/fa348286-4ed9-4e11-8b48-6999c63429f6-kube-api-access-25tcb\") pod \"nova-cell0-conductor-db-sync-ft85m\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:13 crc kubenswrapper[4869]: I0314 09:21:13.831089 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.253858 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"7744b0b811e4f19feaf92d7340f64193a117d543b2fec0bdfb940608e350e976"} Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.257208 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5facda51-8081-455a-93ee-ca02ca6e6e55","Type":"ContainerStarted","Data":"4a9350301779d03acc65cf2e94a1ea863e4637e5d8b1b246cc94840efb73b540"} Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.260242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb","Type":"ContainerStarted","Data":"02a8a47de5c20aeb09c843230a8a70829a3a0079bc4deea42b3b982045fbf8c7"} Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.260295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1ece40c5-10b0-4c1e-8985-99ccf56b5cfb","Type":"ContainerStarted","Data":"7ee5607c6ed1ac33d75145c70c4964864d05f9713cfe7eef84e23bd1b15683d4"} Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.264248 4869 
generic.go:334] "Generic (PLEG): container finished" podID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerID="46d1daee0082d7f3393e51f4a17ecdccb2e127f12e78ce264791d52d3b305f1a" exitCode=0 Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.264273 4869 generic.go:334] "Generic (PLEG): container finished" podID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerID="ec18e63af676c5446445a3964054f690284ebcb961e16a4a33cac090e88e668b" exitCode=2 Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.264281 4869 generic.go:334] "Generic (PLEG): container finished" podID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerID="71d4085a42091e60885b37e606e94cb0986503fe546dee42392fc068ac02956f" exitCode=0 Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.264296 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerDied","Data":"46d1daee0082d7f3393e51f4a17ecdccb2e127f12e78ce264791d52d3b305f1a"} Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.264312 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerDied","Data":"ec18e63af676c5446445a3964054f690284ebcb961e16a4a33cac090e88e668b"} Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.264337 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerDied","Data":"71d4085a42091e60885b37e606e94cb0986503fe546dee42392fc068ac02956f"} Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.406318 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.406411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 
09:21:14.505296 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.505274495 podStartE2EDuration="3.505274495s" podCreationTimestamp="2026-03-14 09:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:14.304116341 +0000 UTC m=+1427.276398394" watchObservedRunningTime="2026-03-14 09:21:14.505274495 +0000 UTC m=+1427.477556538" Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.514238 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ft85m"] Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.547593 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:21:14 crc kubenswrapper[4869]: I0314 09:21:14.547746 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.299625 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ft85m" event={"ID":"fa348286-4ed9-4e11-8b48-6999c63429f6","Type":"ContainerStarted","Data":"8c6f5a495829c711329ada1fc941fbd99806a459b43e5760d5280dee8521f576"} Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.303041 4869 generic.go:334] "Generic (PLEG): container finished" podID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerID="6e7fb6d3815dace322d2364536be9e45dc321e6f3f0e4bca136f6c8a344cbcb1" exitCode=0 Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.303089 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-786bc4c684-kzltd" event={"ID":"443191c7-2ebd-4ac2-a36e-d6c36958dba6","Type":"ContainerDied","Data":"6e7fb6d3815dace322d2364536be9e45dc321e6f3f0e4bca136f6c8a344cbcb1"} Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.304791 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5facda51-8081-455a-93ee-ca02ca6e6e55","Type":"ContainerStarted","Data":"137f7aaec218474614dc8134c4ae841f8b2655e4fd562ae7aeb13206a991add3"} Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.304870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5facda51-8081-455a-93ee-ca02ca6e6e55","Type":"ContainerStarted","Data":"f939af1459660abd9a7f200445a28e06886520b3bf610e220a079bb0de0ade8e"} Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.607150 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.636020 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.635998336 podStartE2EDuration="3.635998336s" podCreationTimestamp="2026-03-14 09:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:15.341970911 +0000 UTC m=+1428.314252964" watchObservedRunningTime="2026-03-14 09:21:15.635998336 +0000 UTC m=+1428.608280389" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.772379 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-config\") pod \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.772925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j268f\" (UniqueName: \"kubernetes.io/projected/443191c7-2ebd-4ac2-a36e-d6c36958dba6-kube-api-access-j268f\") pod \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\" (UID: 
\"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.773183 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-httpd-config\") pod \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.773262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-combined-ca-bundle\") pod \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.773309 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-ovndb-tls-certs\") pod \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\" (UID: \"443191c7-2ebd-4ac2-a36e-d6c36958dba6\") " Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.798742 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "443191c7-2ebd-4ac2-a36e-d6c36958dba6" (UID: "443191c7-2ebd-4ac2-a36e-d6c36958dba6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.799769 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/443191c7-2ebd-4ac2-a36e-d6c36958dba6-kube-api-access-j268f" (OuterVolumeSpecName: "kube-api-access-j268f") pod "443191c7-2ebd-4ac2-a36e-d6c36958dba6" (UID: "443191c7-2ebd-4ac2-a36e-d6c36958dba6"). InnerVolumeSpecName "kube-api-access-j268f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.841064 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-config" (OuterVolumeSpecName: "config") pod "443191c7-2ebd-4ac2-a36e-d6c36958dba6" (UID: "443191c7-2ebd-4ac2-a36e-d6c36958dba6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.856592 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "443191c7-2ebd-4ac2-a36e-d6c36958dba6" (UID: "443191c7-2ebd-4ac2-a36e-d6c36958dba6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.876733 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j268f\" (UniqueName: \"kubernetes.io/projected/443191c7-2ebd-4ac2-a36e-d6c36958dba6-kube-api-access-j268f\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.876764 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.876774 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.876791 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:15 
crc kubenswrapper[4869]: I0314 09:21:15.901060 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "443191c7-2ebd-4ac2-a36e-d6c36958dba6" (UID: "443191c7-2ebd-4ac2-a36e-d6c36958dba6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:15 crc kubenswrapper[4869]: I0314 09:21:15.979075 4869 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/443191c7-2ebd-4ac2-a36e-d6c36958dba6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:16 crc kubenswrapper[4869]: I0314 09:21:16.325062 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-786bc4c684-kzltd" Mar 14 09:21:16 crc kubenswrapper[4869]: I0314 09:21:16.325127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-786bc4c684-kzltd" event={"ID":"443191c7-2ebd-4ac2-a36e-d6c36958dba6","Type":"ContainerDied","Data":"9bc3bb3f9e68f2f12cc5a3b0ea93a883dbfcc14eeb78ff563edafd07324bedac"} Mar 14 09:21:16 crc kubenswrapper[4869]: I0314 09:21:16.325172 4869 scope.go:117] "RemoveContainer" containerID="6dcc1369b777b98fbbaf29434849b469f22f1300adbffe8d8da52febf4d4592a" Mar 14 09:21:16 crc kubenswrapper[4869]: I0314 09:21:16.367426 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-786bc4c684-kzltd"] Mar 14 09:21:16 crc kubenswrapper[4869]: I0314 09:21:16.375227 4869 scope.go:117] "RemoveContainer" containerID="6e7fb6d3815dace322d2364536be9e45dc321e6f3f0e4bca136f6c8a344cbcb1" Mar 14 09:21:16 crc kubenswrapper[4869]: I0314 09:21:16.388693 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-786bc4c684-kzltd"] Mar 14 09:21:17 crc kubenswrapper[4869]: I0314 09:21:17.310225 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:17 crc kubenswrapper[4869]: I0314 09:21:17.313644 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" containerID="cri-o://bd52d27675f2545cb3dfd99c38d779150fce0a328987383bcb00e27d75c18dfe" gracePeriod=30 Mar 14 09:21:17 crc kubenswrapper[4869]: I0314 09:21:17.724317 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" path="/var/lib/kubelet/pods/443191c7-2ebd-4ac2-a36e-d6c36958dba6/volumes" Mar 14 09:21:19 crc kubenswrapper[4869]: I0314 09:21:19.395680 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e0825fa-2453-46a0-b677-79808694bba8" containerID="bd52d27675f2545cb3dfd99c38d779150fce0a328987383bcb00e27d75c18dfe" exitCode=0 Mar 14 09:21:19 crc kubenswrapper[4869]: I0314 09:21:19.395773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerDied","Data":"bd52d27675f2545cb3dfd99c38d779150fce0a328987383bcb00e27d75c18dfe"} Mar 14 09:21:19 crc kubenswrapper[4869]: I0314 09:21:19.396119 4869 scope.go:117] "RemoveContainer" containerID="264ef7f4e026e62ce835a2130ebc791b79e9b024a52717b10e4d89942e45633f" Mar 14 09:21:19 crc kubenswrapper[4869]: I0314 09:21:19.417247 4869 generic.go:334] "Generic (PLEG): container finished" podID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerID="01fdc2e4d53d3ea86f6be9c6b01ed3f82a43da88c1065602f934ac814ab35910" exitCode=0 Mar 14 09:21:19 crc kubenswrapper[4869]: I0314 09:21:19.417452 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerDied","Data":"01fdc2e4d53d3ea86f6be9c6b01ed3f82a43da88c1065602f934ac814ab35910"} Mar 14 09:21:21 crc kubenswrapper[4869]: 
I0314 09:21:21.632477 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 14 09:21:21 crc kubenswrapper[4869]: I0314 09:21:21.632810 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 14 09:21:21 crc kubenswrapper[4869]: I0314 09:21:21.678272 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 14 09:21:21 crc kubenswrapper[4869]: I0314 09:21:21.687344 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 14 09:21:22 crc kubenswrapper[4869]: I0314 09:21:22.450722 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 14 09:21:22 crc kubenswrapper[4869]: I0314 09:21:22.451323 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 14 09:21:22 crc kubenswrapper[4869]: I0314 09:21:22.818336 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:22 crc kubenswrapper[4869]: I0314 09:21:22.818404 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:22 crc kubenswrapper[4869]: I0314 09:21:22.859323 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:22 crc kubenswrapper[4869]: I0314 09:21:22.878485 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:23 crc kubenswrapper[4869]: I0314 09:21:23.462344 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" 
containerID="7744b0b811e4f19feaf92d7340f64193a117d543b2fec0bdfb940608e350e976" exitCode=1 Mar 14 09:21:23 crc kubenswrapper[4869]: I0314 09:21:23.462423 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"7744b0b811e4f19feaf92d7340f64193a117d543b2fec0bdfb940608e350e976"} Mar 14 09:21:23 crc kubenswrapper[4869]: I0314 09:21:23.463356 4869 scope.go:117] "RemoveContainer" containerID="7744b0b811e4f19feaf92d7340f64193a117d543b2fec0bdfb940608e350e976" Mar 14 09:21:23 crc kubenswrapper[4869]: E0314 09:21:23.463732 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:21:23 crc kubenswrapper[4869]: I0314 09:21:23.468368 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="4e13a4d5505bd614dfaa20ecad855cc61f8d04a8b69a6131aa7b405fd4e18565" exitCode=1 Mar 14 09:21:23 crc kubenswrapper[4869]: I0314 09:21:23.468466 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"4e13a4d5505bd614dfaa20ecad855cc61f8d04a8b69a6131aa7b405fd4e18565"} Mar 14 09:21:23 crc kubenswrapper[4869]: I0314 09:21:23.468828 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:23 crc kubenswrapper[4869]: I0314 09:21:23.468903 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:23 crc kubenswrapper[4869]: I0314 09:21:23.469889 4869 scope.go:117] 
"RemoveContainer" containerID="4e13a4d5505bd614dfaa20ecad855cc61f8d04a8b69a6131aa7b405fd4e18565" Mar 14 09:21:23 crc kubenswrapper[4869]: E0314 09:21:23.470190 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:21:24 crc kubenswrapper[4869]: I0314 09:21:24.404690 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:21:24 crc kubenswrapper[4869]: I0314 09:21:24.405010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:21:24 crc kubenswrapper[4869]: I0314 09:21:24.477285 4869 scope.go:117] "RemoveContainer" containerID="7744b0b811e4f19feaf92d7340f64193a117d543b2fec0bdfb940608e350e976" Mar 14 09:21:24 crc kubenswrapper[4869]: E0314 09:21:24.477538 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:21:24 crc kubenswrapper[4869]: I0314 09:21:24.538821 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:21:24 crc kubenswrapper[4869]: I0314 09:21:24.538874 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:21:24 crc kubenswrapper[4869]: I0314 09:21:24.539724 4869 scope.go:117] "RemoveContainer" containerID="4e13a4d5505bd614dfaa20ecad855cc61f8d04a8b69a6131aa7b405fd4e18565" Mar 14 
09:21:24 crc kubenswrapper[4869]: E0314 09:21:24.539949 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:21:24 crc kubenswrapper[4869]: I0314 09:21:24.578320 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 09:21:24 crc kubenswrapper[4869]: I0314 09:21:24.578426 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:21:24 crc kubenswrapper[4869]: I0314 09:21:24.586995 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 14 09:21:25 crc kubenswrapper[4869]: I0314 09:21:25.696455 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:25 crc kubenswrapper[4869]: I0314 09:21:25.696873 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 09:21:25 crc kubenswrapper[4869]: I0314 09:21:25.698405 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 14 09:21:26 crc kubenswrapper[4869]: I0314 09:21:26.714120 4869 scope.go:117] "RemoveContainer" containerID="aad3ebc17078f03f839b6596f0e0c9602b1b0d55731a40e54c6502807b95455b" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.080219 4869 scope.go:117] "RemoveContainer" containerID="9d73f7712585932f964f047e324821c397ba92b8108032a413dc8cbc1ba74f9f" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.100450 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.199560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-combined-ca-bundle\") pod \"0e0825fa-2453-46a0-b677-79808694bba8\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.199647 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e0825fa-2453-46a0-b677-79808694bba8-logs\") pod \"0e0825fa-2453-46a0-b677-79808694bba8\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.199708 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-config-data\") pod \"0e0825fa-2453-46a0-b677-79808694bba8\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.199738 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-custom-prometheus-ca\") pod \"0e0825fa-2453-46a0-b677-79808694bba8\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.199876 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5jn9\" (UniqueName: \"kubernetes.io/projected/0e0825fa-2453-46a0-b677-79808694bba8-kube-api-access-s5jn9\") pod \"0e0825fa-2453-46a0-b677-79808694bba8\" (UID: \"0e0825fa-2453-46a0-b677-79808694bba8\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.200662 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0e0825fa-2453-46a0-b677-79808694bba8-logs" (OuterVolumeSpecName: "logs") pod "0e0825fa-2453-46a0-b677-79808694bba8" (UID: "0e0825fa-2453-46a0-b677-79808694bba8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.203298 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.210833 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0825fa-2453-46a0-b677-79808694bba8-kube-api-access-s5jn9" (OuterVolumeSpecName: "kube-api-access-s5jn9") pod "0e0825fa-2453-46a0-b677-79808694bba8" (UID: "0e0825fa-2453-46a0-b677-79808694bba8"). InnerVolumeSpecName "kube-api-access-s5jn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.245638 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e0825fa-2453-46a0-b677-79808694bba8" (UID: "0e0825fa-2453-46a0-b677-79808694bba8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.253739 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "0e0825fa-2453-46a0-b677-79808694bba8" (UID: "0e0825fa-2453-46a0-b677-79808694bba8"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.289227 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-config-data" (OuterVolumeSpecName: "config-data") pod "0e0825fa-2453-46a0-b677-79808694bba8" (UID: "0e0825fa-2453-46a0-b677-79808694bba8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.304160 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-run-httpd\") pod \"2d5af6be-c5e2-4983-8e41-f7046e785500\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.304245 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-combined-ca-bundle\") pod \"2d5af6be-c5e2-4983-8e41-f7046e785500\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.304333 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-ceilometer-tls-certs\") pod \"2d5af6be-c5e2-4983-8e41-f7046e785500\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.304394 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-sg-core-conf-yaml\") pod \"2d5af6be-c5e2-4983-8e41-f7046e785500\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.304456 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-44llb\" (UniqueName: \"kubernetes.io/projected/2d5af6be-c5e2-4983-8e41-f7046e785500-kube-api-access-44llb\") pod \"2d5af6be-c5e2-4983-8e41-f7046e785500\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.304554 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-config-data\") pod \"2d5af6be-c5e2-4983-8e41-f7046e785500\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.304583 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-scripts\") pod \"2d5af6be-c5e2-4983-8e41-f7046e785500\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.304624 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-log-httpd\") pod \"2d5af6be-c5e2-4983-8e41-f7046e785500\" (UID: \"2d5af6be-c5e2-4983-8e41-f7046e785500\") " Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.304655 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d5af6be-c5e2-4983-8e41-f7046e785500" (UID: "2d5af6be-c5e2-4983-8e41-f7046e785500"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.305204 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.305222 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e0825fa-2453-46a0-b677-79808694bba8-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.305234 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.305244 4869 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0e0825fa-2453-46a0-b677-79808694bba8-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.305254 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.305267 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5jn9\" (UniqueName: \"kubernetes.io/projected/0e0825fa-2453-46a0-b677-79808694bba8-kube-api-access-s5jn9\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.305644 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d5af6be-c5e2-4983-8e41-f7046e785500" (UID: "2d5af6be-c5e2-4983-8e41-f7046e785500"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.309167 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5af6be-c5e2-4983-8e41-f7046e785500-kube-api-access-44llb" (OuterVolumeSpecName: "kube-api-access-44llb") pod "2d5af6be-c5e2-4983-8e41-f7046e785500" (UID: "2d5af6be-c5e2-4983-8e41-f7046e785500"). InnerVolumeSpecName "kube-api-access-44llb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.309639 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-scripts" (OuterVolumeSpecName: "scripts") pod "2d5af6be-c5e2-4983-8e41-f7046e785500" (UID: "2d5af6be-c5e2-4983-8e41-f7046e785500"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.351007 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d5af6be-c5e2-4983-8e41-f7046e785500" (UID: "2d5af6be-c5e2-4983-8e41-f7046e785500"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.365429 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2d5af6be-c5e2-4983-8e41-f7046e785500" (UID: "2d5af6be-c5e2-4983-8e41-f7046e785500"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.398044 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d5af6be-c5e2-4983-8e41-f7046e785500" (UID: "2d5af6be-c5e2-4983-8e41-f7046e785500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.409345 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.409376 4869 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.409386 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.409396 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44llb\" (UniqueName: \"kubernetes.io/projected/2d5af6be-c5e2-4983-8e41-f7046e785500-kube-api-access-44llb\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.409405 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.409413 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/2d5af6be-c5e2-4983-8e41-f7046e785500-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.433144 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-config-data" (OuterVolumeSpecName: "config-data") pod "2d5af6be-c5e2-4983-8e41-f7046e785500" (UID: "2d5af6be-c5e2-4983-8e41-f7046e785500"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.508251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ft85m" event={"ID":"fa348286-4ed9-4e11-8b48-6999c63429f6","Type":"ContainerStarted","Data":"b9854c741b5d7d00c6db01564efaac34ca0749b10b3c8156c7c588e5b187cda7"} Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.511110 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5af6be-c5e2-4983-8e41-f7046e785500-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.516096 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.516122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5af6be-c5e2-4983-8e41-f7046e785500","Type":"ContainerDied","Data":"6db2b74648aab7d9a0c304c1f24b2a61f155a6fe33f3789d95555ba12a6fb437"} Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.516179 4869 scope.go:117] "RemoveContainer" containerID="46d1daee0082d7f3393e51f4a17ecdccb2e127f12e78ce264791d52d3b305f1a" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.524254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e0825fa-2453-46a0-b677-79808694bba8","Type":"ContainerDied","Data":"df6f23c2660847a0fd67d395c3708f5b0327b6f40676e35c662a3ee9d4cb0ee7"} Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.524331 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.531512 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ft85m" podStartSLOduration=2.269648366 podStartE2EDuration="14.531490622s" podCreationTimestamp="2026-03-14 09:21:13 +0000 UTC" firstStartedPulling="2026-03-14 09:21:14.51925421 +0000 UTC m=+1427.491536263" lastFinishedPulling="2026-03-14 09:21:26.781096446 +0000 UTC m=+1439.753378519" observedRunningTime="2026-03-14 09:21:27.522975242 +0000 UTC m=+1440.495257295" watchObservedRunningTime="2026-03-14 09:21:27.531490622 +0000 UTC m=+1440.503772675" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.555964 4869 scope.go:117] "RemoveContainer" containerID="ec18e63af676c5446445a3964054f690284ebcb961e16a4a33cac090e88e668b" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.559306 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:27 
crc kubenswrapper[4869]: I0314 09:21:27.601193 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.615028 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.645933 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.657236 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.657837 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="sg-core" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.657857 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="sg-core" Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.657874 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.657881 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.657898 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.657909 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.657929 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" 
containerName="ceilometer-central-agent" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.657936 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="ceilometer-central-agent" Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.657946 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerName="neutron-httpd" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.657953 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerName="neutron-httpd" Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.657963 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerName="neutron-api" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.657969 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerName="neutron-api" Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.657988 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="ceilometer-notification-agent" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.657994 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="ceilometer-notification-agent" Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.658015 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="proxy-httpd" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658021 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="proxy-httpd" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658227 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="ceilometer-central-agent" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658245 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658261 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerName="neutron-api" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658275 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658286 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="sg-core" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658300 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658312 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="ceilometer-notification-agent" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658326 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" containerName="proxy-httpd" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658341 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="443191c7-2ebd-4ac2-a36e-d6c36958dba6" containerName="neutron-httpd" Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.658719 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 
09:21:27.658736 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: E0314 09:21:27.658755 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658763 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.658974 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0825fa-2453-46a0-b677-79808694bba8" containerName="watcher-decision-engine" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.662092 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.667277 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.668579 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.667469 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.678069 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.680000 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.681726 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.695111 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.718540 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0825fa-2453-46a0-b677-79808694bba8" path="/var/lib/kubelet/pods/0e0825fa-2453-46a0-b677-79808694bba8/volumes" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.719150 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5af6be-c5e2-4983-8e41-f7046e785500" path="/var/lib/kubelet/pods/2d5af6be-c5e2-4983-8e41-f7046e785500/volumes" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.719867 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.720764 4869 scope.go:117] "RemoveContainer" containerID="71d4085a42091e60885b37e606e94cb0986503fe546dee42392fc068ac02956f" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.749308 4869 scope.go:117] "RemoveContainer" containerID="01fdc2e4d53d3ea86f6be9c6b01ed3f82a43da88c1065602f934ac814ab35910" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.808718 4869 scope.go:117] "RemoveContainer" containerID="bd52d27675f2545cb3dfd99c38d779150fce0a328987383bcb00e27d75c18dfe" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.819813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-scripts\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 
09:21:27.819861 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0795b1cf-4f11-46ad-b29c-7af7c9016c01-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.819910 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh47c\" (UniqueName: \"kubernetes.io/projected/0795b1cf-4f11-46ad-b29c-7af7c9016c01-kube-api-access-mh47c\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.819946 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-run-httpd\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.819963 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.819979 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfskv\" (UniqueName: \"kubernetes.io/projected/32809759-cb8c-4895-9ec0-662d6577e350-kube-api-access-bfskv\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.820031 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0795b1cf-4f11-46ad-b29c-7af7c9016c01-logs\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.820064 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-config-data\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.820086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.820175 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.820192 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-log-httpd\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.820208 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" 
(UniqueName: \"kubernetes.io/secret/0795b1cf-4f11-46ad-b29c-7af7c9016c01-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.820227 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0795b1cf-4f11-46ad-b29c-7af7c9016c01-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.922496 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-config-data\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.922827 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923314 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-log-httpd\") pod \"ceilometer-0\" (UID: 
\"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923339 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0795b1cf-4f11-46ad-b29c-7af7c9016c01-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923383 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0795b1cf-4f11-46ad-b29c-7af7c9016c01-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923490 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-scripts\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923538 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0795b1cf-4f11-46ad-b29c-7af7c9016c01-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923601 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh47c\" (UniqueName: \"kubernetes.io/projected/0795b1cf-4f11-46ad-b29c-7af7c9016c01-kube-api-access-mh47c\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc 
kubenswrapper[4869]: I0314 09:21:27.923630 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-run-httpd\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923645 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfskv\" (UniqueName: \"kubernetes.io/projected/32809759-cb8c-4895-9ec0-662d6577e350-kube-api-access-bfskv\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923732 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-log-httpd\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.923760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0795b1cf-4f11-46ad-b29c-7af7c9016c01-logs\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.924973 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-run-httpd\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.927240 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0795b1cf-4f11-46ad-b29c-7af7c9016c01-logs\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.940637 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0795b1cf-4f11-46ad-b29c-7af7c9016c01-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.940942 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-config-data\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.941101 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.941432 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0795b1cf-4f11-46ad-b29c-7af7c9016c01-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc 
kubenswrapper[4869]: I0314 09:21:27.941702 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.942302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.947365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-scripts\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.957055 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0795b1cf-4f11-46ad-b29c-7af7c9016c01-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.959941 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh47c\" (UniqueName: \"kubernetes.io/projected/0795b1cf-4f11-46ad-b29c-7af7c9016c01-kube-api-access-mh47c\") pod \"watcher-decision-engine-0\" (UID: \"0795b1cf-4f11-46ad-b29c-7af7c9016c01\") " pod="openstack/watcher-decision-engine-0" Mar 14 09:21:27 crc kubenswrapper[4869]: I0314 09:21:27.966355 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfskv\" (UniqueName: 
\"kubernetes.io/projected/32809759-cb8c-4895-9ec0-662d6577e350-kube-api-access-bfskv\") pod \"ceilometer-0\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " pod="openstack/ceilometer-0" Mar 14 09:21:28 crc kubenswrapper[4869]: I0314 09:21:28.022324 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:28 crc kubenswrapper[4869]: I0314 09:21:28.039730 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:28 crc kubenswrapper[4869]: I0314 09:21:28.543167 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:28 crc kubenswrapper[4869]: W0314 09:21:28.551346 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32809759_cb8c_4895_9ec0_662d6577e350.slice/crio-ee72ab74041d35acb310c790eb6f38e5447f2f736659723fc953e23d430b1f15 WatchSource:0}: Error finding container ee72ab74041d35acb310c790eb6f38e5447f2f736659723fc953e23d430b1f15: Status 404 returned error can't find the container with id ee72ab74041d35acb310c790eb6f38e5447f2f736659723fc953e23d430b1f15 Mar 14 09:21:28 crc kubenswrapper[4869]: I0314 09:21:28.631382 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Mar 14 09:21:29 crc kubenswrapper[4869]: I0314 09:21:29.555680 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0795b1cf-4f11-46ad-b29c-7af7c9016c01","Type":"ContainerStarted","Data":"51d01e82fe6573b5c24fafbca1e3f3c39163fee598971ef4c9a35ccb2680b47e"} Mar 14 09:21:29 crc kubenswrapper[4869]: I0314 09:21:29.557541 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"0795b1cf-4f11-46ad-b29c-7af7c9016c01","Type":"ContainerStarted","Data":"b137b493304268a36ec09b5bbb53b85fa3765cf39ffb9d8891d85672013170d0"} Mar 14 09:21:29 crc kubenswrapper[4869]: I0314 09:21:29.563171 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerStarted","Data":"28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb"} Mar 14 09:21:29 crc kubenswrapper[4869]: I0314 09:21:29.563446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerStarted","Data":"a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830"} Mar 14 09:21:29 crc kubenswrapper[4869]: I0314 09:21:29.563463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerStarted","Data":"ee72ab74041d35acb310c790eb6f38e5447f2f736659723fc953e23d430b1f15"} Mar 14 09:21:29 crc kubenswrapper[4869]: I0314 09:21:29.607433 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.607393575 podStartE2EDuration="2.607393575s" podCreationTimestamp="2026-03-14 09:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:29.598303171 +0000 UTC m=+1442.570585264" watchObservedRunningTime="2026-03-14 09:21:29.607393575 +0000 UTC m=+1442.579675668" Mar 14 09:21:30 crc kubenswrapper[4869]: I0314 09:21:30.581626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerStarted","Data":"a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a"} Mar 14 09:21:32 crc kubenswrapper[4869]: I0314 09:21:32.606199 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerStarted","Data":"675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523"} Mar 14 09:21:32 crc kubenswrapper[4869]: I0314 09:21:32.607206 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:21:32 crc kubenswrapper[4869]: I0314 09:21:32.638179 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.371711549 podStartE2EDuration="5.63815716s" podCreationTimestamp="2026-03-14 09:21:27 +0000 UTC" firstStartedPulling="2026-03-14 09:21:28.557493828 +0000 UTC m=+1441.529775881" lastFinishedPulling="2026-03-14 09:21:31.823939439 +0000 UTC m=+1444.796221492" observedRunningTime="2026-03-14 09:21:32.634163572 +0000 UTC m=+1445.606445665" watchObservedRunningTime="2026-03-14 09:21:32.63815716 +0000 UTC m=+1445.610439223" Mar 14 09:21:35 crc kubenswrapper[4869]: I0314 09:21:35.707756 4869 scope.go:117] "RemoveContainer" containerID="7744b0b811e4f19feaf92d7340f64193a117d543b2fec0bdfb940608e350e976" Mar 14 09:21:35 crc kubenswrapper[4869]: E0314 09:21:35.708619 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:21:38 crc kubenswrapper[4869]: I0314 09:21:38.040221 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:38 crc kubenswrapper[4869]: I0314 09:21:38.076093 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:38 crc kubenswrapper[4869]: I0314 09:21:38.670557 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:38 crc kubenswrapper[4869]: I0314 09:21:38.698658 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Mar 14 09:21:38 crc kubenswrapper[4869]: I0314 09:21:38.704122 4869 scope.go:117] "RemoveContainer" containerID="4e13a4d5505bd614dfaa20ecad855cc61f8d04a8b69a6131aa7b405fd4e18565" Mar 14 09:21:38 crc kubenswrapper[4869]: E0314 09:21:38.704387 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:21:41 crc kubenswrapper[4869]: I0314 09:21:41.590582 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:41 crc kubenswrapper[4869]: I0314 09:21:41.591174 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="ceilometer-central-agent" containerID="cri-o://a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830" gracePeriod=30 Mar 14 09:21:41 crc kubenswrapper[4869]: I0314 09:21:41.591302 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="ceilometer-notification-agent" containerID="cri-o://28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb" gracePeriod=30 Mar 14 09:21:41 crc kubenswrapper[4869]: I0314 09:21:41.591290 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="sg-core" 
containerID="cri-o://a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a" gracePeriod=30 Mar 14 09:21:41 crc kubenswrapper[4869]: I0314 09:21:41.591528 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="proxy-httpd" containerID="cri-o://675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523" gracePeriod=30 Mar 14 09:21:41 crc kubenswrapper[4869]: I0314 09:21:41.717306 4869 generic.go:334] "Generic (PLEG): container finished" podID="32809759-cb8c-4895-9ec0-662d6577e350" containerID="a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a" exitCode=2 Mar 14 09:21:41 crc kubenswrapper[4869]: I0314 09:21:41.718725 4869 generic.go:334] "Generic (PLEG): container finished" podID="fa348286-4ed9-4e11-8b48-6999c63429f6" containerID="b9854c741b5d7d00c6db01564efaac34ca0749b10b3c8156c7c588e5b187cda7" exitCode=0 Mar 14 09:21:41 crc kubenswrapper[4869]: I0314 09:21:41.730324 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerDied","Data":"a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a"} Mar 14 09:21:41 crc kubenswrapper[4869]: I0314 09:21:41.730390 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ft85m" event={"ID":"fa348286-4ed9-4e11-8b48-6999c63429f6","Type":"ContainerDied","Data":"b9854c741b5d7d00c6db01564efaac34ca0749b10b3c8156c7c588e5b187cda7"} Mar 14 09:21:42 crc kubenswrapper[4869]: I0314 09:21:42.729699 4869 generic.go:334] "Generic (PLEG): container finished" podID="32809759-cb8c-4895-9ec0-662d6577e350" containerID="675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523" exitCode=0 Mar 14 09:21:42 crc kubenswrapper[4869]: I0314 09:21:42.729736 4869 generic.go:334] "Generic (PLEG): container finished" podID="32809759-cb8c-4895-9ec0-662d6577e350" 
containerID="a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830" exitCode=0 Mar 14 09:21:42 crc kubenswrapper[4869]: I0314 09:21:42.729843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerDied","Data":"675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523"} Mar 14 09:21:42 crc kubenswrapper[4869]: I0314 09:21:42.729877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerDied","Data":"a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830"} Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.134661 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.185465 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-config-data\") pod \"fa348286-4ed9-4e11-8b48-6999c63429f6\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.185561 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-scripts\") pod \"fa348286-4ed9-4e11-8b48-6999c63429f6\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.185660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-combined-ca-bundle\") pod \"fa348286-4ed9-4e11-8b48-6999c63429f6\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.185797 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25tcb\" (UniqueName: \"kubernetes.io/projected/fa348286-4ed9-4e11-8b48-6999c63429f6-kube-api-access-25tcb\") pod \"fa348286-4ed9-4e11-8b48-6999c63429f6\" (UID: \"fa348286-4ed9-4e11-8b48-6999c63429f6\") " Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.193391 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-scripts" (OuterVolumeSpecName: "scripts") pod "fa348286-4ed9-4e11-8b48-6999c63429f6" (UID: "fa348286-4ed9-4e11-8b48-6999c63429f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.201883 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa348286-4ed9-4e11-8b48-6999c63429f6-kube-api-access-25tcb" (OuterVolumeSpecName: "kube-api-access-25tcb") pod "fa348286-4ed9-4e11-8b48-6999c63429f6" (UID: "fa348286-4ed9-4e11-8b48-6999c63429f6"). InnerVolumeSpecName "kube-api-access-25tcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.215770 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-config-data" (OuterVolumeSpecName: "config-data") pod "fa348286-4ed9-4e11-8b48-6999c63429f6" (UID: "fa348286-4ed9-4e11-8b48-6999c63429f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.224582 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa348286-4ed9-4e11-8b48-6999c63429f6" (UID: "fa348286-4ed9-4e11-8b48-6999c63429f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.289791 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.289840 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.289856 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa348286-4ed9-4e11-8b48-6999c63429f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.289870 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25tcb\" (UniqueName: \"kubernetes.io/projected/fa348286-4ed9-4e11-8b48-6999c63429f6-kube-api-access-25tcb\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.739582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ft85m" event={"ID":"fa348286-4ed9-4e11-8b48-6999c63429f6","Type":"ContainerDied","Data":"8c6f5a495829c711329ada1fc941fbd99806a459b43e5760d5280dee8521f576"} Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.739930 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c6f5a495829c711329ada1fc941fbd99806a459b43e5760d5280dee8521f576" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.740001 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ft85m" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.874988 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:21:43 crc kubenswrapper[4869]: E0314 09:21:43.875387 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa348286-4ed9-4e11-8b48-6999c63429f6" containerName="nova-cell0-conductor-db-sync" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.875405 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa348286-4ed9-4e11-8b48-6999c63429f6" containerName="nova-cell0-conductor-db-sync" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.876013 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa348286-4ed9-4e11-8b48-6999c63429f6" containerName="nova-cell0-conductor-db-sync" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.876833 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.880634 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7hdhz" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.881091 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 14 09:21:43 crc kubenswrapper[4869]: I0314 09:21:43.897207 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.003234 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24df6\" (UniqueName: \"kubernetes.io/projected/8f87b32c-d592-4060-b041-67b0d9a0bd25-kube-api-access-24df6\") pod \"nova-cell0-conductor-0\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc 
kubenswrapper[4869]: I0314 09:21:44.003358 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.003420 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.105793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24df6\" (UniqueName: \"kubernetes.io/projected/8f87b32c-d592-4060-b041-67b0d9a0bd25-kube-api-access-24df6\") pod \"nova-cell0-conductor-0\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.105892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.105935 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.121424 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.123327 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.123936 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24df6\" (UniqueName: \"kubernetes.io/projected/8f87b32c-d592-4060-b041-67b0d9a0bd25-kube-api-access-24df6\") pod \"nova-cell0-conductor-0\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.197178 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.230296 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.479084 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.518302 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-run-httpd\") pod \"32809759-cb8c-4895-9ec0-662d6577e350\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.518478 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-config-data\") pod \"32809759-cb8c-4895-9ec0-662d6577e350\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.518581 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-log-httpd\") pod \"32809759-cb8c-4895-9ec0-662d6577e350\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.518720 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-combined-ca-bundle\") pod \"32809759-cb8c-4895-9ec0-662d6577e350\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.518822 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfskv\" (UniqueName: \"kubernetes.io/projected/32809759-cb8c-4895-9ec0-662d6577e350-kube-api-access-bfskv\") pod \"32809759-cb8c-4895-9ec0-662d6577e350\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.518964 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "32809759-cb8c-4895-9ec0-662d6577e350" (UID: "32809759-cb8c-4895-9ec0-662d6577e350"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.519045 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-ceilometer-tls-certs\") pod \"32809759-cb8c-4895-9ec0-662d6577e350\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.519208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "32809759-cb8c-4895-9ec0-662d6577e350" (UID: "32809759-cb8c-4895-9ec0-662d6577e350"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.519225 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-sg-core-conf-yaml\") pod \"32809759-cb8c-4895-9ec0-662d6577e350\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.519326 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-scripts\") pod \"32809759-cb8c-4895-9ec0-662d6577e350\" (UID: \"32809759-cb8c-4895-9ec0-662d6577e350\") " Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.526641 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.526703 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32809759-cb8c-4895-9ec0-662d6577e350-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.527782 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-scripts" (OuterVolumeSpecName: "scripts") pod "32809759-cb8c-4895-9ec0-662d6577e350" (UID: "32809759-cb8c-4895-9ec0-662d6577e350"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.532778 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32809759-cb8c-4895-9ec0-662d6577e350-kube-api-access-bfskv" (OuterVolumeSpecName: "kube-api-access-bfskv") pod "32809759-cb8c-4895-9ec0-662d6577e350" (UID: "32809759-cb8c-4895-9ec0-662d6577e350"). InnerVolumeSpecName "kube-api-access-bfskv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.630272 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.630716 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfskv\" (UniqueName: \"kubernetes.io/projected/32809759-cb8c-4895-9ec0-662d6577e350-kube-api-access-bfskv\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.646818 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "32809759-cb8c-4895-9ec0-662d6577e350" (UID: "32809759-cb8c-4895-9ec0-662d6577e350"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.648704 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32809759-cb8c-4895-9ec0-662d6577e350" (UID: "32809759-cb8c-4895-9ec0-662d6577e350"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.662684 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "32809759-cb8c-4895-9ec0-662d6577e350" (UID: "32809759-cb8c-4895-9ec0-662d6577e350"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.699093 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-config-data" (OuterVolumeSpecName: "config-data") pod "32809759-cb8c-4895-9ec0-662d6577e350" (UID: "32809759-cb8c-4895-9ec0-662d6577e350"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.732417 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.732454 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.732464 4869 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.732473 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32809759-cb8c-4895-9ec0-662d6577e350-sg-core-conf-yaml\") on node \"crc\" 
DevicePath \"\"" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.752847 4869 generic.go:334] "Generic (PLEG): container finished" podID="32809759-cb8c-4895-9ec0-662d6577e350" containerID="28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb" exitCode=0 Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.752908 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerDied","Data":"28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb"} Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.752975 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32809759-cb8c-4895-9ec0-662d6577e350","Type":"ContainerDied","Data":"ee72ab74041d35acb310c790eb6f38e5447f2f736659723fc953e23d430b1f15"} Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.753003 4869 scope.go:117] "RemoveContainer" containerID="675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.754320 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.775885 4869 scope.go:117] "RemoveContainer" containerID="a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.800354 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.813892 4869 scope.go:117] "RemoveContainer" containerID="28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.820692 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.843578 4869 scope.go:117] "RemoveContainer" containerID="a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.848726 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:44 crc kubenswrapper[4869]: E0314 09:21:44.849249 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="proxy-httpd" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.849273 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="proxy-httpd" Mar 14 09:21:44 crc kubenswrapper[4869]: E0314 09:21:44.849303 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="ceilometer-central-agent" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.849310 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="ceilometer-central-agent" Mar 14 09:21:44 crc kubenswrapper[4869]: E0314 09:21:44.849337 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="ceilometer-notification-agent" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.849343 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="ceilometer-notification-agent" Mar 14 09:21:44 crc kubenswrapper[4869]: E0314 09:21:44.849352 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="sg-core" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.849359 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="sg-core" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.849598 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="proxy-httpd" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.849634 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="sg-core" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.849648 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="ceilometer-notification-agent" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.849672 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="32809759-cb8c-4895-9ec0-662d6577e350" containerName="ceilometer-central-agent" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.852179 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.859753 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.860280 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.860336 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.887970 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.896737 4869 scope.go:117] "RemoveContainer" containerID="675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523" Mar 14 09:21:44 crc kubenswrapper[4869]: E0314 09:21:44.897287 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523\": container with ID starting with 675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523 not found: ID does not exist" containerID="675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.897399 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523"} err="failed to get container status \"675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523\": rpc error: code = NotFound desc = could not find container \"675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523\": container with ID starting with 675a8c59ed0c71ce3ee8ba0c97b675bf47c0f8756d23ab7a09a2ad335b94f523 not found: ID does not exist" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 
09:21:44.897525 4869 scope.go:117] "RemoveContainer" containerID="a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a" Mar 14 09:21:44 crc kubenswrapper[4869]: E0314 09:21:44.897920 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a\": container with ID starting with a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a not found: ID does not exist" containerID="a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.897978 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a"} err="failed to get container status \"a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a\": rpc error: code = NotFound desc = could not find container \"a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a\": container with ID starting with a264a862fc745ed757964e5e395fe527522116d413a623aa948df6f8a9e14d3a not found: ID does not exist" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.898017 4869 scope.go:117] "RemoveContainer" containerID="28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb" Mar 14 09:21:44 crc kubenswrapper[4869]: E0314 09:21:44.899842 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb\": container with ID starting with 28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb not found: ID does not exist" containerID="28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.899998 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb"} err="failed to get container status \"28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb\": rpc error: code = NotFound desc = could not find container \"28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb\": container with ID starting with 28610f8896443bbfc6c88f24cfa20c9faff27c58601ac7978177df60a03ba7fb not found: ID does not exist" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.900096 4869 scope.go:117] "RemoveContainer" containerID="a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830" Mar 14 09:21:44 crc kubenswrapper[4869]: E0314 09:21:44.901290 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830\": container with ID starting with a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830 not found: ID does not exist" containerID="a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.901374 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830"} err="failed to get container status \"a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830\": rpc error: code = NotFound desc = could not find container \"a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830\": container with ID starting with a61a51053f46f4ec5a7b93c1d4ab720907c1caf43bc95f6844421c5b042a8830 not found: ID does not exist" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.903141 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.935841 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.935903 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.936289 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-run-httpd\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.936601 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-config-data\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.936698 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-scripts\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.936945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.936972 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bhzs\" (UniqueName: \"kubernetes.io/projected/cd9d4a35-a73c-44d4-a767-64c6aba10b42-kube-api-access-8bhzs\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:44 crc kubenswrapper[4869]: I0314 09:21:44.937208 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-log-httpd\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.039007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.039779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bhzs\" (UniqueName: \"kubernetes.io/projected/cd9d4a35-a73c-44d4-a767-64c6aba10b42-kube-api-access-8bhzs\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.039821 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-log-httpd\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " 
pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.039891 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.039914 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.039959 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-run-httpd\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.040008 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-config-data\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.040060 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-scripts\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.040596 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-log-httpd\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.040702 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-run-httpd\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.043792 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-scripts\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.044080 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.044252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.044475 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.045356 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-config-data\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.059144 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bhzs\" (UniqueName: \"kubernetes.io/projected/cd9d4a35-a73c-44d4-a767-64c6aba10b42-kube-api-access-8bhzs\") pod \"ceilometer-0\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.174637 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.632463 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:45 crc kubenswrapper[4869]: W0314 09:21:45.634838 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd9d4a35_a73c_44d4_a767_64c6aba10b42.slice/crio-8f337b863675b104d58cc9595ecdbeee3e043be85f1d9871ecbde82b6f15ba23 WatchSource:0}: Error finding container 8f337b863675b104d58cc9595ecdbeee3e043be85f1d9871ecbde82b6f15ba23: Status 404 returned error can't find the container with id 8f337b863675b104d58cc9595ecdbeee3e043be85f1d9871ecbde82b6f15ba23 Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.722751 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32809759-cb8c-4895-9ec0-662d6577e350" path="/var/lib/kubelet/pods/32809759-cb8c-4895-9ec0-662d6577e350/volumes" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.782870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerStarted","Data":"8f337b863675b104d58cc9595ecdbeee3e043be85f1d9871ecbde82b6f15ba23"} Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.789125 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8f87b32c-d592-4060-b041-67b0d9a0bd25","Type":"ContainerStarted","Data":"1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d"} Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.789281 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8f87b32c-d592-4060-b041-67b0d9a0bd25","Type":"ContainerStarted","Data":"ea97425487c1668640c8527d00ce00fd1198dc8bd16179b33415f2c1ac838e94"} Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.789557 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" containerName="nova-cell0-conductor-conductor" containerID="cri-o://1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" gracePeriod=30 Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.789835 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.822374 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:45 crc kubenswrapper[4869]: I0314 09:21:45.828122 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.828092116 podStartE2EDuration="2.828092116s" podCreationTimestamp="2026-03-14 09:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:21:45.813904196 +0000 UTC m=+1458.786186249" watchObservedRunningTime="2026-03-14 09:21:45.828092116 
+0000 UTC m=+1458.800374179" Mar 14 09:21:46 crc kubenswrapper[4869]: I0314 09:21:46.809929 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerStarted","Data":"8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c"} Mar 14 09:21:46 crc kubenswrapper[4869]: I0314 09:21:46.810537 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerStarted","Data":"13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3"} Mar 14 09:21:46 crc kubenswrapper[4869]: I0314 09:21:46.810554 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerStarted","Data":"f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51"} Mar 14 09:21:48 crc kubenswrapper[4869]: I0314 09:21:48.831601 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerStarted","Data":"216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1"} Mar 14 09:21:48 crc kubenswrapper[4869]: I0314 09:21:48.833146 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:21:48 crc kubenswrapper[4869]: I0314 09:21:48.832233 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="sg-core" containerID="cri-o://8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c" gracePeriod=30 Mar 14 09:21:48 crc kubenswrapper[4869]: I0314 09:21:48.832259 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="ceilometer-notification-agent" 
containerID="cri-o://13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3" gracePeriod=30 Mar 14 09:21:48 crc kubenswrapper[4869]: I0314 09:21:48.832192 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="ceilometer-central-agent" containerID="cri-o://f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51" gracePeriod=30 Mar 14 09:21:48 crc kubenswrapper[4869]: I0314 09:21:48.832021 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="proxy-httpd" containerID="cri-o://216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1" gracePeriod=30 Mar 14 09:21:48 crc kubenswrapper[4869]: I0314 09:21:48.865930 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.092244634 podStartE2EDuration="4.865897765s" podCreationTimestamp="2026-03-14 09:21:44 +0000 UTC" firstStartedPulling="2026-03-14 09:21:45.637714388 +0000 UTC m=+1458.609996441" lastFinishedPulling="2026-03-14 09:21:48.411367519 +0000 UTC m=+1461.383649572" observedRunningTime="2026-03-14 09:21:48.855011376 +0000 UTC m=+1461.827293429" watchObservedRunningTime="2026-03-14 09:21:48.865897765 +0000 UTC m=+1461.838179818" Mar 14 09:21:49 crc kubenswrapper[4869]: I0314 09:21:49.853555 4869 generic.go:334] "Generic (PLEG): container finished" podID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerID="216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1" exitCode=0 Mar 14 09:21:49 crc kubenswrapper[4869]: I0314 09:21:49.853586 4869 generic.go:334] "Generic (PLEG): container finished" podID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerID="8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c" exitCode=2 Mar 14 09:21:49 crc kubenswrapper[4869]: I0314 09:21:49.853596 4869 
generic.go:334] "Generic (PLEG): container finished" podID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerID="13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3" exitCode=0 Mar 14 09:21:49 crc kubenswrapper[4869]: I0314 09:21:49.853613 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerDied","Data":"216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1"} Mar 14 09:21:49 crc kubenswrapper[4869]: I0314 09:21:49.853636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerDied","Data":"8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c"} Mar 14 09:21:49 crc kubenswrapper[4869]: I0314 09:21:49.853646 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerDied","Data":"13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3"} Mar 14 09:21:50 crc kubenswrapper[4869]: I0314 09:21:50.705223 4869 scope.go:117] "RemoveContainer" containerID="7744b0b811e4f19feaf92d7340f64193a117d543b2fec0bdfb940608e350e976" Mar 14 09:21:50 crc kubenswrapper[4869]: E0314 09:21:50.705959 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:21:52 crc kubenswrapper[4869]: I0314 09:21:52.703860 4869 scope.go:117] "RemoveContainer" containerID="4e13a4d5505bd614dfaa20ecad855cc61f8d04a8b69a6131aa7b405fd4e18565" Mar 14 09:21:52 crc kubenswrapper[4869]: E0314 09:21:52.704361 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"horizon\" with CrashLoopBackOff: \"back-off 40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:21:54 crc kubenswrapper[4869]: E0314 09:21:54.200664 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:21:54 crc kubenswrapper[4869]: E0314 09:21:54.202995 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:21:54 crc kubenswrapper[4869]: E0314 09:21:54.204467 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:21:54 crc kubenswrapper[4869]: E0314 09:21:54.204632 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" containerName="nova-cell0-conductor-conductor" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.580106 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.676242 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-combined-ca-bundle\") pod \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.676431 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-config-data\") pod \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.676465 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-sg-core-conf-yaml\") pod \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.676549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-scripts\") pod \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.676612 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-log-httpd\") pod \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.676644 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bhzs\" (UniqueName: 
\"kubernetes.io/projected/cd9d4a35-a73c-44d4-a767-64c6aba10b42-kube-api-access-8bhzs\") pod \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.676684 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-run-httpd\") pod \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.676764 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-ceilometer-tls-certs\") pod \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\" (UID: \"cd9d4a35-a73c-44d4-a767-64c6aba10b42\") " Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.677880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cd9d4a35-a73c-44d4-a767-64c6aba10b42" (UID: "cd9d4a35-a73c-44d4-a767-64c6aba10b42"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.678315 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cd9d4a35-a73c-44d4-a767-64c6aba10b42" (UID: "cd9d4a35-a73c-44d4-a767-64c6aba10b42"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.683730 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-scripts" (OuterVolumeSpecName: "scripts") pod "cd9d4a35-a73c-44d4-a767-64c6aba10b42" (UID: "cd9d4a35-a73c-44d4-a767-64c6aba10b42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.683779 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9d4a35-a73c-44d4-a767-64c6aba10b42-kube-api-access-8bhzs" (OuterVolumeSpecName: "kube-api-access-8bhzs") pod "cd9d4a35-a73c-44d4-a767-64c6aba10b42" (UID: "cd9d4a35-a73c-44d4-a767-64c6aba10b42"). InnerVolumeSpecName "kube-api-access-8bhzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.705174 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cd9d4a35-a73c-44d4-a767-64c6aba10b42" (UID: "cd9d4a35-a73c-44d4-a767-64c6aba10b42"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.731967 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cd9d4a35-a73c-44d4-a767-64c6aba10b42" (UID: "cd9d4a35-a73c-44d4-a767-64c6aba10b42"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.761411 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd9d4a35-a73c-44d4-a767-64c6aba10b42" (UID: "cd9d4a35-a73c-44d4-a767-64c6aba10b42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.776556 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-config-data" (OuterVolumeSpecName: "config-data") pod "cd9d4a35-a73c-44d4-a767-64c6aba10b42" (UID: "cd9d4a35-a73c-44d4-a767-64c6aba10b42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.782787 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.782914 4869 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.782932 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.782946 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-config-data\") on node \"crc\" DevicePath \"\"" Mar 
14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.782958 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.782969 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d4a35-a73c-44d4-a767-64c6aba10b42-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.782978 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd9d4a35-a73c-44d4-a767-64c6aba10b42-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.782991 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bhzs\" (UniqueName: \"kubernetes.io/projected/cd9d4a35-a73c-44d4-a767-64c6aba10b42-kube-api-access-8bhzs\") on node \"crc\" DevicePath \"\"" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.906116 4869 generic.go:334] "Generic (PLEG): container finished" podID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerID="f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51" exitCode=0 Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.906159 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerDied","Data":"f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51"} Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.906186 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd9d4a35-a73c-44d4-a767-64c6aba10b42","Type":"ContainerDied","Data":"8f337b863675b104d58cc9595ecdbeee3e043be85f1d9871ecbde82b6f15ba23"} Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.906203 4869 scope.go:117] 
"RemoveContainer" containerID="216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.906682 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.924842 4869 scope.go:117] "RemoveContainer" containerID="8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.940968 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.958961 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.969760 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:55 crc kubenswrapper[4869]: E0314 09:21:55.970262 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="ceilometer-central-agent" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.970277 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="ceilometer-central-agent" Mar 14 09:21:55 crc kubenswrapper[4869]: E0314 09:21:55.970291 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="ceilometer-notification-agent" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.970299 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="ceilometer-notification-agent" Mar 14 09:21:55 crc kubenswrapper[4869]: E0314 09:21:55.970333 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="proxy-httpd" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.970339 4869 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="proxy-httpd" Mar 14 09:21:55 crc kubenswrapper[4869]: E0314 09:21:55.970347 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="sg-core" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.970353 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="sg-core" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.970549 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="ceilometer-notification-agent" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.970559 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="sg-core" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.970572 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="proxy-httpd" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.970593 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" containerName="ceilometer-central-agent" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.972122 4869 scope.go:117] "RemoveContainer" containerID="13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.972475 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.975320 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.976823 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.978107 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.985945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-run-httpd\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.986019 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-config-data\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.986094 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvj7\" (UniqueName: \"kubernetes.io/projected/f704d4fe-9b14-40c2-b757-90db0c351d7b-kube-api-access-dkvj7\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.986117 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-scripts\") pod \"ceilometer-0\" (UID: 
\"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.986150 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.986202 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.986224 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.986252 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-log-httpd\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:55 crc kubenswrapper[4869]: I0314 09:21:55.994788 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.002600 4869 scope.go:117] "RemoveContainer" containerID="f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.020921 4869 scope.go:117] 
"RemoveContainer" containerID="216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1" Mar 14 09:21:56 crc kubenswrapper[4869]: E0314 09:21:56.021399 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1\": container with ID starting with 216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1 not found: ID does not exist" containerID="216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.021442 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1"} err="failed to get container status \"216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1\": rpc error: code = NotFound desc = could not find container \"216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1\": container with ID starting with 216daf6053625f1483a9e1105f29ed5cb112d37409f2b0f74a5b4093d5143fc1 not found: ID does not exist" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.021471 4869 scope.go:117] "RemoveContainer" containerID="8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c" Mar 14 09:21:56 crc kubenswrapper[4869]: E0314 09:21:56.021876 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c\": container with ID starting with 8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c not found: ID does not exist" containerID="8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.021913 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c"} err="failed to get container status \"8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c\": rpc error: code = NotFound desc = could not find container \"8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c\": container with ID starting with 8e0a6aa0e803da3d1214089fb656fe0a32854b37fd181ea5bdad1b02c6ca577c not found: ID does not exist" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.021935 4869 scope.go:117] "RemoveContainer" containerID="13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3" Mar 14 09:21:56 crc kubenswrapper[4869]: E0314 09:21:56.022307 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3\": container with ID starting with 13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3 not found: ID does not exist" containerID="13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.022333 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3"} err="failed to get container status \"13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3\": rpc error: code = NotFound desc = could not find container \"13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3\": container with ID starting with 13317d2d8f3d3eb7d188aabdbbfe80d58e375f57e6c1f03b8dbb411a36df9be3 not found: ID does not exist" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.022350 4869 scope.go:117] "RemoveContainer" containerID="f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51" Mar 14 09:21:56 crc kubenswrapper[4869]: E0314 09:21:56.022713 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51\": container with ID starting with f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51 not found: ID does not exist" containerID="f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.022746 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51"} err="failed to get container status \"f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51\": rpc error: code = NotFound desc = could not find container \"f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51\": container with ID starting with f50d1f381adb4fb4349806bd2f1cbbfade1d08df9dab2b90035fc5865a097f51 not found: ID does not exist" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.087895 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-config-data\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.087975 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvj7\" (UniqueName: \"kubernetes.io/projected/f704d4fe-9b14-40c2-b757-90db0c351d7b-kube-api-access-dkvj7\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.087996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-scripts\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" 
Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.088034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.088081 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.088102 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.088130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-log-httpd\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.088229 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-run-httpd\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.089073 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-log-httpd\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.089185 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-run-httpd\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.092447 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.092961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-config-data\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.093719 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-scripts\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.095010 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.101664 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.103806 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvj7\" (UniqueName: \"kubernetes.io/projected/f704d4fe-9b14-40c2-b757-90db0c351d7b-kube-api-access-dkvj7\") pod \"ceilometer-0\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.300009 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.750841 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:21:56 crc kubenswrapper[4869]: I0314 09:21:56.916881 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerStarted","Data":"c0e7356ec38bbfa6a85d0772b5f20f8aadf4faa04191ef1d3843832bca4b6a48"} Mar 14 09:21:57 crc kubenswrapper[4869]: I0314 09:21:57.715654 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9d4a35-a73c-44d4-a767-64c6aba10b42" path="/var/lib/kubelet/pods/cd9d4a35-a73c-44d4-a767-64c6aba10b42/volumes" Mar 14 09:21:57 crc kubenswrapper[4869]: I0314 09:21:57.928338 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerStarted","Data":"f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2"} Mar 14 09:21:57 crc kubenswrapper[4869]: I0314 09:21:57.928387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerStarted","Data":"cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60"} Mar 14 09:21:57 crc kubenswrapper[4869]: I0314 09:21:57.928399 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerStarted","Data":"28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df"} Mar 14 09:21:59 crc kubenswrapper[4869]: E0314 09:21:59.201526 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:21:59 crc kubenswrapper[4869]: E0314 09:21:59.203181 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:21:59 crc kubenswrapper[4869]: E0314 09:21:59.204790 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:21:59 crc kubenswrapper[4869]: E0314 09:21:59.204916 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" 
containerName="nova-cell0-conductor-conductor" Mar 14 09:21:59 crc kubenswrapper[4869]: I0314 09:21:59.960960 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerStarted","Data":"76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d"} Mar 14 09:21:59 crc kubenswrapper[4869]: I0314 09:21:59.961545 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:21:59 crc kubenswrapper[4869]: I0314 09:21:59.987888 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.222121608 podStartE2EDuration="4.987866423s" podCreationTimestamp="2026-03-14 09:21:55 +0000 UTC" firstStartedPulling="2026-03-14 09:21:56.756818107 +0000 UTC m=+1469.729100160" lastFinishedPulling="2026-03-14 09:21:59.522562922 +0000 UTC m=+1472.494844975" observedRunningTime="2026-03-14 09:21:59.979364064 +0000 UTC m=+1472.951646137" watchObservedRunningTime="2026-03-14 09:21:59.987866423 +0000 UTC m=+1472.960148476" Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.159106 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558002-d5tn6"] Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.160761 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558002-d5tn6" Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.162601 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.162856 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.162982 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.178047 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558002-d5tn6"] Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.274996 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltpkk\" (UniqueName: \"kubernetes.io/projected/b06766ec-aff5-4eb7-983b-bfa7fdd84b72-kube-api-access-ltpkk\") pod \"auto-csr-approver-29558002-d5tn6\" (UID: \"b06766ec-aff5-4eb7-983b-bfa7fdd84b72\") " pod="openshift-infra/auto-csr-approver-29558002-d5tn6" Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.377762 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltpkk\" (UniqueName: \"kubernetes.io/projected/b06766ec-aff5-4eb7-983b-bfa7fdd84b72-kube-api-access-ltpkk\") pod \"auto-csr-approver-29558002-d5tn6\" (UID: \"b06766ec-aff5-4eb7-983b-bfa7fdd84b72\") " pod="openshift-infra/auto-csr-approver-29558002-d5tn6" Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.398640 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltpkk\" (UniqueName: \"kubernetes.io/projected/b06766ec-aff5-4eb7-983b-bfa7fdd84b72-kube-api-access-ltpkk\") pod \"auto-csr-approver-29558002-d5tn6\" (UID: \"b06766ec-aff5-4eb7-983b-bfa7fdd84b72\") " 
pod="openshift-infra/auto-csr-approver-29558002-d5tn6" Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.478136 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558002-d5tn6" Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.968558 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558002-d5tn6"] Mar 14 09:22:00 crc kubenswrapper[4869]: I0314 09:22:00.982778 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558002-d5tn6" event={"ID":"b06766ec-aff5-4eb7-983b-bfa7fdd84b72","Type":"ContainerStarted","Data":"5354f7736701c79728afc2f2015985efc17d7dcdb033b6da7a11a05cbdb0ceda"} Mar 14 09:22:03 crc kubenswrapper[4869]: I0314 09:22:03.013128 4869 generic.go:334] "Generic (PLEG): container finished" podID="b06766ec-aff5-4eb7-983b-bfa7fdd84b72" containerID="6ed0f21547372c6f7cf2117f677a8f498581d2cce37740671961fcdda12ad084" exitCode=0 Mar 14 09:22:03 crc kubenswrapper[4869]: I0314 09:22:03.013291 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558002-d5tn6" event={"ID":"b06766ec-aff5-4eb7-983b-bfa7fdd84b72","Type":"ContainerDied","Data":"6ed0f21547372c6f7cf2117f677a8f498581d2cce37740671961fcdda12ad084"} Mar 14 09:22:03 crc kubenswrapper[4869]: I0314 09:22:03.704364 4869 scope.go:117] "RemoveContainer" containerID="7744b0b811e4f19feaf92d7340f64193a117d543b2fec0bdfb940608e350e976" Mar 14 09:22:04 crc kubenswrapper[4869]: I0314 09:22:04.026474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d"} Mar 14 09:22:04 crc kubenswrapper[4869]: E0314 09:22:04.201926 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:22:04 crc kubenswrapper[4869]: E0314 09:22:04.204135 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:22:04 crc kubenswrapper[4869]: E0314 09:22:04.207179 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:22:04 crc kubenswrapper[4869]: E0314 09:22:04.207243 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" containerName="nova-cell0-conductor-conductor" Mar 14 09:22:04 crc kubenswrapper[4869]: I0314 09:22:04.404389 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:22:04 crc kubenswrapper[4869]: I0314 09:22:04.404495 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:22:04 crc kubenswrapper[4869]: I0314 09:22:04.423831 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558002-d5tn6" Mar 14 09:22:04 crc kubenswrapper[4869]: I0314 09:22:04.564797 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltpkk\" (UniqueName: \"kubernetes.io/projected/b06766ec-aff5-4eb7-983b-bfa7fdd84b72-kube-api-access-ltpkk\") pod \"b06766ec-aff5-4eb7-983b-bfa7fdd84b72\" (UID: \"b06766ec-aff5-4eb7-983b-bfa7fdd84b72\") " Mar 14 09:22:04 crc kubenswrapper[4869]: I0314 09:22:04.570712 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06766ec-aff5-4eb7-983b-bfa7fdd84b72-kube-api-access-ltpkk" (OuterVolumeSpecName: "kube-api-access-ltpkk") pod "b06766ec-aff5-4eb7-983b-bfa7fdd84b72" (UID: "b06766ec-aff5-4eb7-983b-bfa7fdd84b72"). InnerVolumeSpecName "kube-api-access-ltpkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:04 crc kubenswrapper[4869]: I0314 09:22:04.667372 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltpkk\" (UniqueName: \"kubernetes.io/projected/b06766ec-aff5-4eb7-983b-bfa7fdd84b72-kube-api-access-ltpkk\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:04 crc kubenswrapper[4869]: I0314 09:22:04.703743 4869 scope.go:117] "RemoveContainer" containerID="4e13a4d5505bd614dfaa20ecad855cc61f8d04a8b69a6131aa7b405fd4e18565" Mar 14 09:22:05 crc kubenswrapper[4869]: I0314 09:22:05.037034 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558002-d5tn6" event={"ID":"b06766ec-aff5-4eb7-983b-bfa7fdd84b72","Type":"ContainerDied","Data":"5354f7736701c79728afc2f2015985efc17d7dcdb033b6da7a11a05cbdb0ceda"} Mar 14 09:22:05 crc kubenswrapper[4869]: I0314 09:22:05.037072 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5354f7736701c79728afc2f2015985efc17d7dcdb033b6da7a11a05cbdb0ceda" Mar 14 09:22:05 crc kubenswrapper[4869]: I0314 09:22:05.037123 4869 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558002-d5tn6" Mar 14 09:22:05 crc kubenswrapper[4869]: I0314 09:22:05.043569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d"} Mar 14 09:22:05 crc kubenswrapper[4869]: I0314 09:22:05.500478 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557996-f24j7"] Mar 14 09:22:05 crc kubenswrapper[4869]: I0314 09:22:05.513141 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557996-f24j7"] Mar 14 09:22:05 crc kubenswrapper[4869]: I0314 09:22:05.718079 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf7591e-7c0f-48eb-a174-b926b51c75a5" path="/var/lib/kubelet/pods/acf7591e-7c0f-48eb-a174-b926b51c75a5/volumes" Mar 14 09:22:09 crc kubenswrapper[4869]: E0314 09:22:09.201567 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:22:09 crc kubenswrapper[4869]: E0314 09:22:09.206701 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:22:09 crc kubenswrapper[4869]: E0314 09:22:09.208600 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:22:09 crc kubenswrapper[4869]: E0314 09:22:09.208642 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" containerName="nova-cell0-conductor-conductor" Mar 14 09:22:13 crc kubenswrapper[4869]: I0314 09:22:13.136049 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d" exitCode=1 Mar 14 09:22:13 crc kubenswrapper[4869]: I0314 09:22:13.137061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d"} Mar 14 09:22:13 crc kubenswrapper[4869]: I0314 09:22:13.137125 4869 scope.go:117] "RemoveContainer" containerID="7744b0b811e4f19feaf92d7340f64193a117d543b2fec0bdfb940608e350e976" Mar 14 09:22:13 crc kubenswrapper[4869]: I0314 09:22:13.138552 4869 scope.go:117] "RemoveContainer" containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d" Mar 14 09:22:13 crc kubenswrapper[4869]: E0314 09:22:13.138955 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.154233 4869 
generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" exitCode=1 Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.154841 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d"} Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.154917 4869 scope.go:117] "RemoveContainer" containerID="4e13a4d5505bd614dfaa20ecad855cc61f8d04a8b69a6131aa7b405fd4e18565" Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.156104 4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" Mar 14 09:22:14 crc kubenswrapper[4869]: E0314 09:22:14.156428 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:22:14 crc kubenswrapper[4869]: E0314 09:22:14.205562 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:22:14 crc kubenswrapper[4869]: E0314 09:22:14.207358 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:22:14 crc kubenswrapper[4869]: E0314 09:22:14.216049 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 14 09:22:14 crc kubenswrapper[4869]: E0314 09:22:14.216192 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" containerName="nova-cell0-conductor-conductor" Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.404681 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.404771 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.405839 4869 scope.go:117] "RemoveContainer" containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d" Mar 14 09:22:14 crc kubenswrapper[4869]: E0314 09:22:14.406051 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.538861 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:22:14 crc 
kubenswrapper[4869]: I0314 09:22:14.538959 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.539004 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:22:14 crc kubenswrapper[4869]: I0314 09:22:14.539014 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:22:15 crc kubenswrapper[4869]: I0314 09:22:15.169349 4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" Mar 14 09:22:15 crc kubenswrapper[4869]: E0314 09:22:15.169814 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.178826 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f87b32c-d592-4060-b041-67b0d9a0bd25" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" exitCode=137 Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.179208 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8f87b32c-d592-4060-b041-67b0d9a0bd25","Type":"ContainerDied","Data":"1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d"} Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.179754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8f87b32c-d592-4060-b041-67b0d9a0bd25","Type":"ContainerDied","Data":"ea97425487c1668640c8527d00ce00fd1198dc8bd16179b33415f2c1ac838e94"} Mar 14 09:22:16 crc 
kubenswrapper[4869]: I0314 09:22:16.179789 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea97425487c1668640c8527d00ce00fd1198dc8bd16179b33415f2c1ac838e94" Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.180410 4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" Mar 14 09:22:16 crc kubenswrapper[4869]: E0314 09:22:16.180696 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.243589 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.331836 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24df6\" (UniqueName: \"kubernetes.io/projected/8f87b32c-d592-4060-b041-67b0d9a0bd25-kube-api-access-24df6\") pod \"8f87b32c-d592-4060-b041-67b0d9a0bd25\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.331928 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-combined-ca-bundle\") pod \"8f87b32c-d592-4060-b041-67b0d9a0bd25\" (UID: \"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.332009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-config-data\") pod \"8f87b32c-d592-4060-b041-67b0d9a0bd25\" (UID: 
\"8f87b32c-d592-4060-b041-67b0d9a0bd25\") " Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.340723 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f87b32c-d592-4060-b041-67b0d9a0bd25-kube-api-access-24df6" (OuterVolumeSpecName: "kube-api-access-24df6") pod "8f87b32c-d592-4060-b041-67b0d9a0bd25" (UID: "8f87b32c-d592-4060-b041-67b0d9a0bd25"). InnerVolumeSpecName "kube-api-access-24df6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.363299 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-config-data" (OuterVolumeSpecName: "config-data") pod "8f87b32c-d592-4060-b041-67b0d9a0bd25" (UID: "8f87b32c-d592-4060-b041-67b0d9a0bd25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.364833 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f87b32c-d592-4060-b041-67b0d9a0bd25" (UID: "8f87b32c-d592-4060-b041-67b0d9a0bd25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.434469 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24df6\" (UniqueName: \"kubernetes.io/projected/8f87b32c-d592-4060-b041-67b0d9a0bd25-kube-api-access-24df6\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.434517 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:16 crc kubenswrapper[4869]: I0314 09:22:16.434529 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f87b32c-d592-4060-b041-67b0d9a0bd25-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.188937 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.227575 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.238841 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.251137 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:22:17 crc kubenswrapper[4869]: E0314 09:22:17.251773 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06766ec-aff5-4eb7-983b-bfa7fdd84b72" containerName="oc" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.251793 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06766ec-aff5-4eb7-983b-bfa7fdd84b72" containerName="oc" Mar 14 09:22:17 crc kubenswrapper[4869]: E0314 09:22:17.251812 4869 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" containerName="nova-cell0-conductor-conductor" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.251821 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" containerName="nova-cell0-conductor-conductor" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.252104 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06766ec-aff5-4eb7-983b-bfa7fdd84b72" containerName="oc" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.252135 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" containerName="nova-cell0-conductor-conductor" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.254581 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.256984 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.257300 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7hdhz" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.267192 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.355757 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.355806 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.356201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4fl\" (UniqueName: \"kubernetes.io/projected/1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc-kube-api-access-sn4fl\") pod \"nova-cell0-conductor-0\" (UID: \"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.458069 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.458119 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.458223 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn4fl\" (UniqueName: \"kubernetes.io/projected/1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc-kube-api-access-sn4fl\") pod \"nova-cell0-conductor-0\" (UID: \"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.463967 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.464468 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.475049 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn4fl\" (UniqueName: \"kubernetes.io/projected/1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc-kube-api-access-sn4fl\") pod \"nova-cell0-conductor-0\" (UID: \"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc\") " pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.576474 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.719605 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f87b32c-d592-4060-b041-67b0d9a0bd25" path="/var/lib/kubelet/pods/8f87b32c-d592-4060-b041-67b0d9a0bd25/volumes" Mar 14 09:22:17 crc kubenswrapper[4869]: I0314 09:22:17.856286 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 14 09:22:18 crc kubenswrapper[4869]: I0314 09:22:18.202939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc","Type":"ContainerStarted","Data":"003c7c73e6b854bb6f81fe68e50ec2d22684a2cd271e6b9febdb3d6e5aca1f0d"} Mar 14 09:22:18 crc kubenswrapper[4869]: I0314 09:22:18.203379 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc","Type":"ContainerStarted","Data":"2e73b08cf3214a33562d3137afc4baa2da32880575542e4b3277aad248be8709"} Mar 14 09:22:18 crc kubenswrapper[4869]: I0314 09:22:18.203403 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:18 crc kubenswrapper[4869]: I0314 09:22:18.224431 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.22441241 podStartE2EDuration="1.22441241s" podCreationTimestamp="2026-03-14 09:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:18.217091819 +0000 UTC m=+1491.189373882" watchObservedRunningTime="2026-03-14 09:22:18.22441241 +0000 UTC m=+1491.196694463" Mar 14 09:22:26 crc kubenswrapper[4869]: I0314 09:22:26.319550 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 14 
09:22:27 crc kubenswrapper[4869]: I0314 09:22:27.616095 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 14 09:22:27 crc kubenswrapper[4869]: I0314 09:22:27.729364 4869 scope.go:117] "RemoveContainer" containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d" Mar 14 09:22:27 crc kubenswrapper[4869]: E0314 09:22:27.729949 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.186589 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xnjqv"] Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.188408 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.202922 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.203617 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.215856 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xnjqv"] Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.303893 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.304152 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmd7\" (UniqueName: \"kubernetes.io/projected/31c5dd4a-369c-43a8-9d96-b67997800a45-kube-api-access-wmmd7\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.304258 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-scripts\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.304350 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-config-data\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.406816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.406973 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmmd7\" (UniqueName: \"kubernetes.io/projected/31c5dd4a-369c-43a8-9d96-b67997800a45-kube-api-access-wmmd7\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.407028 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-scripts\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.407093 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-config-data\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.417582 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-scripts\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.417920 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-config-data\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.474265 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.490157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmmd7\" (UniqueName: \"kubernetes.io/projected/31c5dd4a-369c-43a8-9d96-b67997800a45-kube-api-access-wmmd7\") pod \"nova-cell0-cell-mapping-xnjqv\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.514946 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.516237 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.517832 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.552161 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.606551 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.624420 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.624549 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-logs\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.624582 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-config-data\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.624728 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x89mz\" (UniqueName: \"kubernetes.io/projected/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-kube-api-access-x89mz\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.631984 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.635333 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.649076 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.705972 4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" Mar 14 09:22:28 crc kubenswrapper[4869]: E0314 09:22:28.706241 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.729164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.729258 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-logs\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.729288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-config-data\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" 
Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.729315 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2p2j\" (UniqueName: \"kubernetes.io/projected/15315065-aae1-4408-9929-5183df48226b-kube-api-access-c2p2j\") pod \"nova-scheduler-0\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.729411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.729446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-config-data\") pod \"nova-scheduler-0\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.729469 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x89mz\" (UniqueName: \"kubernetes.io/projected/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-kube-api-access-x89mz\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.731018 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-logs\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.740267 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-config-data\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.740327 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.748166 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.776162 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x89mz\" (UniqueName: \"kubernetes.io/projected/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-kube-api-access-x89mz\") pod \"nova-api-0\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " pod="openstack/nova-api-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.799612 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.801475 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.809205 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.831597 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2p2j\" (UniqueName: \"kubernetes.io/projected/15315065-aae1-4408-9929-5183df48226b-kube-api-access-c2p2j\") pod \"nova-scheduler-0\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.831726 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.831755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-config-data\") pod \"nova-scheduler-0\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.854428 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.855930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-config-data\") pod \"nova-scheduler-0\" (UID: 
\"15315065-aae1-4408-9929-5183df48226b\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.883133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2p2j\" (UniqueName: \"kubernetes.io/projected/15315065-aae1-4408-9929-5183df48226b-kube-api-access-c2p2j\") pod \"nova-scheduler-0\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.891079 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.934002 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62664fd0-e198-48ae-add2-1c1918f1a697-logs\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.934373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wd64\" (UniqueName: \"kubernetes.io/projected/62664fd0-e198-48ae-add2-1c1918f1a697-kube-api-access-6wd64\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.934401 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-config-data\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.934546 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.972489 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784c8c5dcf-6dcv7"] Mar 14 09:22:28 crc kubenswrapper[4869]: I0314 09:22:28.974382 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:28.983765 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:28.990474 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:28.992402 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.009399 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.029929 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.045898 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-nb\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.045980 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wd64\" (UniqueName: \"kubernetes.io/projected/62664fd0-e198-48ae-add2-1c1918f1a697-kube-api-access-6wd64\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.046025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-config-data\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.046172 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-sb\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.046198 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:29 crc 
kubenswrapper[4869]: I0314 09:22:29.046220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-config\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.046247 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-swift-storage-0\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.046281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jdl\" (UniqueName: \"kubernetes.io/projected/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-kube-api-access-88jdl\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.046317 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-svc\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.046344 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62664fd0-e198-48ae-add2-1c1918f1a697-logs\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 
09:22:29.047908 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62664fd0-e198-48ae-add2-1c1918f1a697-logs\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.061888 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.070146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-config-data\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.081330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wd64\" (UniqueName: \"kubernetes.io/projected/62664fd0-e198-48ae-add2-1c1918f1a697-kube-api-access-6wd64\") pod \"nova-metadata-0\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " pod="openstack/nova-metadata-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.088822 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784c8c5dcf-6dcv7"] Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.131116 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.136615 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.147836 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.147966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.148246 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k725b\" (UniqueName: \"kubernetes.io/projected/1303f846-c5e7-483c-963d-00ba423883b1-kube-api-access-k725b\") pod \"nova-cell1-novncproxy-0\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.148609 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-sb\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.148662 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-config\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " 
pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.148705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-swift-storage-0\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.148751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88jdl\" (UniqueName: \"kubernetes.io/projected/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-kube-api-access-88jdl\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.148809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-svc\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.148996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-nb\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.152928 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-nb\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 
09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.152939 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-sb\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.155125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-config\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.168311 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-swift-storage-0\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.171644 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-svc\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.204141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88jdl\" (UniqueName: \"kubernetes.io/projected/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-kube-api-access-88jdl\") pod \"dnsmasq-dns-784c8c5dcf-6dcv7\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.253039 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.253457 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k725b\" (UniqueName: \"kubernetes.io/projected/1303f846-c5e7-483c-963d-00ba423883b1-kube-api-access-k725b\") pod \"nova-cell1-novncproxy-0\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.253632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.258272 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.263638 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.272267 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k725b\" (UniqueName: 
\"kubernetes.io/projected/1303f846-c5e7-483c-963d-00ba423883b1-kube-api-access-k725b\") pod \"nova-cell1-novncproxy-0\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.352331 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.385731 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:29 crc kubenswrapper[4869]: W0314 09:22:29.567670 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31c5dd4a_369c_43a8_9d96_b67997800a45.slice/crio-1f3f1f88de5c80a78dc6e641dee764f7ca7322ba0f01246e7249b068cb47e084 WatchSource:0}: Error finding container 1f3f1f88de5c80a78dc6e641dee764f7ca7322ba0f01246e7249b068cb47e084: Status 404 returned error can't find the container with id 1f3f1f88de5c80a78dc6e641dee764f7ca7322ba0f01246e7249b068cb47e084 Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.588280 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xnjqv"] Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.636766 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.870757 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k8p4c"] Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.872893 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.882290 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.882344 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 14 09:22:29 crc kubenswrapper[4869]: W0314 09:22:29.902852 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15315065_aae1_4408_9929_5183df48226b.slice/crio-3e2f1e81503adc0fe08478101f21f16d1b3e20acac23c02b644d5b1cff9125f7 WatchSource:0}: Error finding container 3e2f1e81503adc0fe08478101f21f16d1b3e20acac23c02b644d5b1cff9125f7: Status 404 returned error can't find the container with id 3e2f1e81503adc0fe08478101f21f16d1b3e20acac23c02b644d5b1cff9125f7 Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.903654 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k8p4c"] Mar 14 09:22:29 crc kubenswrapper[4869]: W0314 09:22:29.910925 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62664fd0_e198_48ae_add2_1c1918f1a697.slice/crio-83d941055f75fdf61e640c494e18b314de25c66a3dd4cc165e8c62b6df976128 WatchSource:0}: Error finding container 83d941055f75fdf61e640c494e18b314de25c66a3dd4cc165e8c62b6df976128: Status 404 returned error can't find the container with id 83d941055f75fdf61e640c494e18b314de25c66a3dd4cc165e8c62b6df976128 Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.922948 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.948172 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:29 
crc kubenswrapper[4869]: I0314 09:22:29.970171 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-scripts\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.970233 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqm4w\" (UniqueName: \"kubernetes.io/projected/868be304-0fd3-401b-8f0d-c1997da82c45-kube-api-access-gqm4w\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.970304 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:29 crc kubenswrapper[4869]: I0314 09:22:29.970346 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-config-data\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.072447 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: 
\"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.072586 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-config-data\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.072715 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-scripts\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.072769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqm4w\" (UniqueName: \"kubernetes.io/projected/868be304-0fd3-401b-8f0d-c1997da82c45-kube-api-access-gqm4w\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.079941 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-config-data\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.080132 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-scripts\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " 
pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.082714 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.097692 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqm4w\" (UniqueName: \"kubernetes.io/projected/868be304-0fd3-401b-8f0d-c1997da82c45-kube-api-access-gqm4w\") pod \"nova-cell1-conductor-db-sync-k8p4c\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") " pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.171870 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.191759 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784c8c5dcf-6dcv7"] Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.197046 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k8p4c" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.329443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" event={"ID":"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45","Type":"ContainerStarted","Data":"7af3acecc5057866a1376756098d80f3946c89b937836eefcce94322ae9a7537"} Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.342840 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1303f846-c5e7-483c-963d-00ba423883b1","Type":"ContainerStarted","Data":"8bcd350b677ff8b9289ab8e725ffc26bd3d34439bfb016766465699b178ae67d"} Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.349860 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"15315065-aae1-4408-9929-5183df48226b","Type":"ContainerStarted","Data":"3e2f1e81503adc0fe08478101f21f16d1b3e20acac23c02b644d5b1cff9125f7"} Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.353307 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62664fd0-e198-48ae-add2-1c1918f1a697","Type":"ContainerStarted","Data":"83d941055f75fdf61e640c494e18b314de25c66a3dd4cc165e8c62b6df976128"} Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.358413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xnjqv" event={"ID":"31c5dd4a-369c-43a8-9d96-b67997800a45","Type":"ContainerStarted","Data":"0fd11c30969a4181b257eb1ca7ccfc35a87f6f858eaa0821edd30f27cb9e9e12"} Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.358467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xnjqv" event={"ID":"31c5dd4a-369c-43a8-9d96-b67997800a45","Type":"ContainerStarted","Data":"1f3f1f88de5c80a78dc6e641dee764f7ca7322ba0f01246e7249b068cb47e084"} Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 
09:22:30.380849 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6aad5484-e5ed-42b0-9e91-89b0b5d4001c","Type":"ContainerStarted","Data":"b686226706ea44ae27ff5905d38110f77fb9ad07fc83ab5b8d4eb7cae9d98afe"} Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.387016 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xnjqv" podStartSLOduration=2.386989163 podStartE2EDuration="2.386989163s" podCreationTimestamp="2026-03-14 09:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:30.376835673 +0000 UTC m=+1503.349117726" watchObservedRunningTime="2026-03-14 09:22:30.386989163 +0000 UTC m=+1503.359271216" Mar 14 09:22:30 crc kubenswrapper[4869]: I0314 09:22:30.859612 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k8p4c"] Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.409886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k8p4c" event={"ID":"868be304-0fd3-401b-8f0d-c1997da82c45","Type":"ContainerStarted","Data":"2f43876528e28de0b50e043a4ec6e9fbb8e890e50ef96f7dec215876b2468a59"} Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.410295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k8p4c" event={"ID":"868be304-0fd3-401b-8f0d-c1997da82c45","Type":"ContainerStarted","Data":"2232991e0259a79da4fd58b27da218f39d1ca7e9a62d65bd85745bc61a0ec56b"} Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.413019 4869 generic.go:334] "Generic (PLEG): container finished" podID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" containerID="9fc48c4d9645c78ee2ad30c8675604fd6b17d956a4d55a4b45d723ec165e633a" exitCode=0 Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.413087 4869 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" event={"ID":"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45","Type":"ContainerDied","Data":"9fc48c4d9645c78ee2ad30c8675604fd6b17d956a4d55a4b45d723ec165e633a"} Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.429446 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-k8p4c" podStartSLOduration=2.429416397 podStartE2EDuration="2.429416397s" podCreationTimestamp="2026-03-14 09:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:31.425360067 +0000 UTC m=+1504.397642120" watchObservedRunningTime="2026-03-14 09:22:31.429416397 +0000 UTC m=+1504.401698450" Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.884926 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8b998"] Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.887897 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.895552 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8b998"] Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.947970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g24rd\" (UniqueName: \"kubernetes.io/projected/2f8bfcd6-918b-4007-a900-e44c8628c2ed-kube-api-access-g24rd\") pod \"redhat-operators-8b998\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") " pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.948337 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-catalog-content\") pod \"redhat-operators-8b998\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") " pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:31 crc kubenswrapper[4869]: I0314 09:22:31.949891 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-utilities\") pod \"redhat-operators-8b998\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") " pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:32 crc kubenswrapper[4869]: I0314 09:22:32.052385 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g24rd\" (UniqueName: \"kubernetes.io/projected/2f8bfcd6-918b-4007-a900-e44c8628c2ed-kube-api-access-g24rd\") pod \"redhat-operators-8b998\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") " pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:32 crc kubenswrapper[4869]: I0314 09:22:32.052475 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-catalog-content\") pod \"redhat-operators-8b998\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") " pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:32 crc kubenswrapper[4869]: I0314 09:22:32.052696 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-utilities\") pod \"redhat-operators-8b998\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") " pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:32 crc kubenswrapper[4869]: I0314 09:22:32.053233 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-catalog-content\") pod \"redhat-operators-8b998\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") " pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:32 crc kubenswrapper[4869]: I0314 09:22:32.053274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-utilities\") pod \"redhat-operators-8b998\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") " pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:32 crc kubenswrapper[4869]: I0314 09:22:32.080707 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g24rd\" (UniqueName: \"kubernetes.io/projected/2f8bfcd6-918b-4007-a900-e44c8628c2ed-kube-api-access-g24rd\") pod \"redhat-operators-8b998\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") " pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:32 crc kubenswrapper[4869]: I0314 09:22:32.228129 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8b998" Mar 14 09:22:32 crc kubenswrapper[4869]: I0314 09:22:32.955801 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:22:32 crc kubenswrapper[4869]: I0314 09:22:32.985706 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:22:37 crc kubenswrapper[4869]: W0314 09:22:37.840048 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f8bfcd6_918b_4007_a900_e44c8628c2ed.slice/crio-84b33f42fa53c34c496e2865ee33a578dfad4a7a25481b5521d1e320cfff8cd5 WatchSource:0}: Error finding container 84b33f42fa53c34c496e2865ee33a578dfad4a7a25481b5521d1e320cfff8cd5: Status 404 returned error can't find the container with id 84b33f42fa53c34c496e2865ee33a578dfad4a7a25481b5521d1e320cfff8cd5 Mar 14 09:22:37 crc kubenswrapper[4869]: I0314 09:22:37.841763 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8b998"] Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.512377 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62664fd0-e198-48ae-add2-1c1918f1a697","Type":"ContainerStarted","Data":"582e654bf3c9642a52dfe82f2fd2ee14bb5bba770cc7cb126de42d4c2a23e581"} Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.512671 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62664fd0-e198-48ae-add2-1c1918f1a697","Type":"ContainerStarted","Data":"ac083ae7b59cc20181999c7086b34921b8a9fd679f0cd4da835f84294ad3c8a7"} Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.512568 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="62664fd0-e198-48ae-add2-1c1918f1a697" containerName="nova-metadata-metadata" 
containerID="cri-o://582e654bf3c9642a52dfe82f2fd2ee14bb5bba770cc7cb126de42d4c2a23e581" gracePeriod=30 Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.512502 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="62664fd0-e198-48ae-add2-1c1918f1a697" containerName="nova-metadata-log" containerID="cri-o://ac083ae7b59cc20181999c7086b34921b8a9fd679f0cd4da835f84294ad3c8a7" gracePeriod=30 Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.520385 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6aad5484-e5ed-42b0-9e91-89b0b5d4001c","Type":"ContainerStarted","Data":"6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9"} Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.520745 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6aad5484-e5ed-42b0-9e91-89b0b5d4001c","Type":"ContainerStarted","Data":"bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8"} Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.524751 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" event={"ID":"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45","Type":"ContainerStarted","Data":"173939437de19f433a974f24f049841bfaa51d521435a9b1a08c89e216a07807"} Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.525001 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.527036 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1303f846-c5e7-483c-963d-00ba423883b1","Type":"ContainerStarted","Data":"07c7be6f206163639b2b94c9a261be6b20e585477b735d3f95d562505aebe380"} Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.527149 4869 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-cell1-novncproxy-0" podUID="1303f846-c5e7-483c-963d-00ba423883b1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://07c7be6f206163639b2b94c9a261be6b20e585477b735d3f95d562505aebe380" gracePeriod=30 Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.532594 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.931402414 podStartE2EDuration="10.532577087s" podCreationTimestamp="2026-03-14 09:22:28 +0000 UTC" firstStartedPulling="2026-03-14 09:22:29.913268788 +0000 UTC m=+1502.885550841" lastFinishedPulling="2026-03-14 09:22:37.514443461 +0000 UTC m=+1510.486725514" observedRunningTime="2026-03-14 09:22:38.531580462 +0000 UTC m=+1511.503862535" watchObservedRunningTime="2026-03-14 09:22:38.532577087 +0000 UTC m=+1511.504859150" Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.534373 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerID="f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc" exitCode=0 Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.534459 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b998" event={"ID":"2f8bfcd6-918b-4007-a900-e44c8628c2ed","Type":"ContainerDied","Data":"f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc"} Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.534494 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b998" event={"ID":"2f8bfcd6-918b-4007-a900-e44c8628c2ed","Type":"ContainerStarted","Data":"84b33f42fa53c34c496e2865ee33a578dfad4a7a25481b5521d1e320cfff8cd5"} Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.539338 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"15315065-aae1-4408-9929-5183df48226b","Type":"ContainerStarted","Data":"af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa"} Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.560620 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.750268445 podStartE2EDuration="10.560602328s" podCreationTimestamp="2026-03-14 09:22:28 +0000 UTC" firstStartedPulling="2026-03-14 09:22:29.745899559 +0000 UTC m=+1502.718181612" lastFinishedPulling="2026-03-14 09:22:37.556233432 +0000 UTC m=+1510.528515495" observedRunningTime="2026-03-14 09:22:38.551320179 +0000 UTC m=+1511.523602232" watchObservedRunningTime="2026-03-14 09:22:38.560602328 +0000 UTC m=+1511.532884381" Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.582707 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.432091453 podStartE2EDuration="10.582688482s" podCreationTimestamp="2026-03-14 09:22:28 +0000 UTC" firstStartedPulling="2026-03-14 09:22:30.163286115 +0000 UTC m=+1503.135568168" lastFinishedPulling="2026-03-14 09:22:37.313883144 +0000 UTC m=+1510.286165197" observedRunningTime="2026-03-14 09:22:38.569880836 +0000 UTC m=+1511.542162889" watchObservedRunningTime="2026-03-14 09:22:38.582688482 +0000 UTC m=+1511.554970535" Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.598873 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" podStartSLOduration=10.598844131 podStartE2EDuration="10.598844131s" podCreationTimestamp="2026-03-14 09:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:38.591768137 +0000 UTC m=+1511.564050220" watchObservedRunningTime="2026-03-14 09:22:38.598844131 +0000 UTC m=+1511.571126204" Mar 14 09:22:38 crc kubenswrapper[4869]: 
I0314 09:22:38.639528 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.193371285 podStartE2EDuration="10.639486944s" podCreationTimestamp="2026-03-14 09:22:28 +0000 UTC" firstStartedPulling="2026-03-14 09:22:29.910913539 +0000 UTC m=+1502.883195592" lastFinishedPulling="2026-03-14 09:22:37.357029198 +0000 UTC m=+1510.329311251" observedRunningTime="2026-03-14 09:22:38.617358718 +0000 UTC m=+1511.589640771" watchObservedRunningTime="2026-03-14 09:22:38.639486944 +0000 UTC m=+1511.611768997" Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.984716 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:22:38 crc kubenswrapper[4869]: I0314 09:22:38.984769 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.033894 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.033951 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.076053 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.137746 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.137796 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.386053 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.566376 4869 generic.go:334] 
"Generic (PLEG): container finished" podID="62664fd0-e198-48ae-add2-1c1918f1a697" containerID="ac083ae7b59cc20181999c7086b34921b8a9fd679f0cd4da835f84294ad3c8a7" exitCode=143 Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.566535 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62664fd0-e198-48ae-add2-1c1918f1a697","Type":"ContainerDied","Data":"ac083ae7b59cc20181999c7086b34921b8a9fd679f0cd4da835f84294ad3c8a7"} Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.571439 4869 generic.go:334] "Generic (PLEG): container finished" podID="31c5dd4a-369c-43a8-9d96-b67997800a45" containerID="0fd11c30969a4181b257eb1ca7ccfc35a87f6f858eaa0821edd30f27cb9e9e12" exitCode=0 Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.571525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xnjqv" event={"ID":"31c5dd4a-369c-43a8-9d96-b67997800a45","Type":"ContainerDied","Data":"0fd11c30969a4181b257eb1ca7ccfc35a87f6f858eaa0821edd30f27cb9e9e12"} Mar 14 09:22:39 crc kubenswrapper[4869]: I0314 09:22:39.626973 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 14 09:22:40 crc kubenswrapper[4869]: I0314 09:22:40.067844 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:40 crc kubenswrapper[4869]: I0314 09:22:40.067872 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:22:40 crc kubenswrapper[4869]: I0314 09:22:40.583679 
4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b998" event={"ID":"2f8bfcd6-918b-4007-a900-e44c8628c2ed","Type":"ContainerStarted","Data":"dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508"} Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.047877 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.184201 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmmd7\" (UniqueName: \"kubernetes.io/projected/31c5dd4a-369c-43a8-9d96-b67997800a45-kube-api-access-wmmd7\") pod \"31c5dd4a-369c-43a8-9d96-b67997800a45\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.184265 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-combined-ca-bundle\") pod \"31c5dd4a-369c-43a8-9d96-b67997800a45\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.184413 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-config-data\") pod \"31c5dd4a-369c-43a8-9d96-b67997800a45\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.184467 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-scripts\") pod \"31c5dd4a-369c-43a8-9d96-b67997800a45\" (UID: \"31c5dd4a-369c-43a8-9d96-b67997800a45\") " Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.191578 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-scripts" (OuterVolumeSpecName: "scripts") pod "31c5dd4a-369c-43a8-9d96-b67997800a45" (UID: "31c5dd4a-369c-43a8-9d96-b67997800a45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.193584 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c5dd4a-369c-43a8-9d96-b67997800a45-kube-api-access-wmmd7" (OuterVolumeSpecName: "kube-api-access-wmmd7") pod "31c5dd4a-369c-43a8-9d96-b67997800a45" (UID: "31c5dd4a-369c-43a8-9d96-b67997800a45"). InnerVolumeSpecName "kube-api-access-wmmd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.225747 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31c5dd4a-369c-43a8-9d96-b67997800a45" (UID: "31c5dd4a-369c-43a8-9d96-b67997800a45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.236823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-config-data" (OuterVolumeSpecName: "config-data") pod "31c5dd4a-369c-43a8-9d96-b67997800a45" (UID: "31c5dd4a-369c-43a8-9d96-b67997800a45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.288544 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmmd7\" (UniqueName: \"kubernetes.io/projected/31c5dd4a-369c-43a8-9d96-b67997800a45-kube-api-access-wmmd7\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.289001 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.289199 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.289372 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31c5dd4a-369c-43a8-9d96-b67997800a45-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.598321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xnjqv" event={"ID":"31c5dd4a-369c-43a8-9d96-b67997800a45","Type":"ContainerDied","Data":"1f3f1f88de5c80a78dc6e641dee764f7ca7322ba0f01246e7249b068cb47e084"} Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.598366 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f3f1f88de5c80a78dc6e641dee764f7ca7322ba0f01246e7249b068cb47e084" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.599417 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xnjqv" Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.781682 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.782003 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-log" containerID="cri-o://bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8" gracePeriod=30 Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.782104 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-api" containerID="cri-o://6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9" gracePeriod=30 Mar 14 09:22:41 crc kubenswrapper[4869]: I0314 09:22:41.792672 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:42 crc kubenswrapper[4869]: I0314 09:22:42.612942 4869 generic.go:334] "Generic (PLEG): container finished" podID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerID="bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8" exitCode=143 Mar 14 09:22:42 crc kubenswrapper[4869]: I0314 09:22:42.613018 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6aad5484-e5ed-42b0-9e91-89b0b5d4001c","Type":"ContainerDied","Data":"bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8"} Mar 14 09:22:42 crc kubenswrapper[4869]: I0314 09:22:42.615373 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerID="dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508" exitCode=0 Mar 14 09:22:42 crc kubenswrapper[4869]: I0314 09:22:42.615470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8b998" event={"ID":"2f8bfcd6-918b-4007-a900-e44c8628c2ed","Type":"ContainerDied","Data":"dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508"} Mar 14 09:22:42 crc kubenswrapper[4869]: I0314 09:22:42.615706 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="15315065-aae1-4408-9929-5183df48226b" containerName="nova-scheduler-scheduler" containerID="cri-o://af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa" gracePeriod=30 Mar 14 09:22:42 crc kubenswrapper[4869]: I0314 09:22:42.704352 4869 scope.go:117] "RemoveContainer" containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d" Mar 14 09:22:42 crc kubenswrapper[4869]: E0314 09:22:42.705059 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:22:43 crc kubenswrapper[4869]: I0314 09:22:43.704897 4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" Mar 14 09:22:43 crc kubenswrapper[4869]: E0314 09:22:43.705122 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:22:44 crc kubenswrapper[4869]: E0314 09:22:44.035450 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa is running failed: container process not found" containerID="af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:22:44 crc kubenswrapper[4869]: E0314 09:22:44.036491 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa is running failed: container process not found" containerID="af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:22:44 crc kubenswrapper[4869]: E0314 09:22:44.037815 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa is running failed: container process not found" containerID="af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:22:44 crc kubenswrapper[4869]: E0314 09:22:44.037984 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="15315065-aae1-4408-9929-5183df48226b" containerName="nova-scheduler-scheduler" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.354706 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.461015 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457bb75c5-8j2q5"] Mar 14 09:22:45 crc 
kubenswrapper[4869]: I0314 09:22:44.461235 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" podUID="49f3fe18-ffd4-4273-97eb-98e94f198608" containerName="dnsmasq-dns" containerID="cri-o://294ef10949f3b2561553f5e1a6547394e9a8c1ed183f46add858a1790ad88c8f" gracePeriod=10 Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.492782 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.562199 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2p2j\" (UniqueName: \"kubernetes.io/projected/15315065-aae1-4408-9929-5183df48226b-kube-api-access-c2p2j\") pod \"15315065-aae1-4408-9929-5183df48226b\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.562280 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-config-data\") pod \"15315065-aae1-4408-9929-5183df48226b\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.562528 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-combined-ca-bundle\") pod \"15315065-aae1-4408-9929-5183df48226b\" (UID: \"15315065-aae1-4408-9929-5183df48226b\") " Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.584967 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15315065-aae1-4408-9929-5183df48226b-kube-api-access-c2p2j" (OuterVolumeSpecName: "kube-api-access-c2p2j") pod "15315065-aae1-4408-9929-5183df48226b" (UID: "15315065-aae1-4408-9929-5183df48226b"). InnerVolumeSpecName "kube-api-access-c2p2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.624831 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-config-data" (OuterVolumeSpecName: "config-data") pod "15315065-aae1-4408-9929-5183df48226b" (UID: "15315065-aae1-4408-9929-5183df48226b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.656150 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b998" event={"ID":"2f8bfcd6-918b-4007-a900-e44c8628c2ed","Type":"ContainerStarted","Data":"1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9"} Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.658888 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15315065-aae1-4408-9929-5183df48226b" (UID: "15315065-aae1-4408-9929-5183df48226b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.660103 4869 generic.go:334] "Generic (PLEG): container finished" podID="15315065-aae1-4408-9929-5183df48226b" containerID="af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa" exitCode=0 Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.660228 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"15315065-aae1-4408-9929-5183df48226b","Type":"ContainerDied","Data":"af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa"} Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.660287 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"15315065-aae1-4408-9929-5183df48226b","Type":"ContainerDied","Data":"3e2f1e81503adc0fe08478101f21f16d1b3e20acac23c02b644d5b1cff9125f7"} Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.660310 4869 scope.go:117] "RemoveContainer" containerID="af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.660630 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.691686 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2p2j\" (UniqueName: \"kubernetes.io/projected/15315065-aae1-4408-9929-5183df48226b-kube-api-access-c2p2j\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.691713 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.691722 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15315065-aae1-4408-9929-5183df48226b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.701329 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" event={"ID":"49f3fe18-ffd4-4273-97eb-98e94f198608","Type":"ContainerDied","Data":"294ef10949f3b2561553f5e1a6547394e9a8c1ed183f46add858a1790ad88c8f"} Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.702570 4869 generic.go:334] "Generic (PLEG): container finished" podID="49f3fe18-ffd4-4273-97eb-98e94f198608" containerID="294ef10949f3b2561553f5e1a6547394e9a8c1ed183f46add858a1790ad88c8f" exitCode=0 Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.706407 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8b998" podStartSLOduration=8.557085078 podStartE2EDuration="13.7063918s" podCreationTimestamp="2026-03-14 09:22:31 +0000 UTC" firstStartedPulling="2026-03-14 09:22:38.536614525 +0000 UTC m=+1511.508896578" lastFinishedPulling="2026-03-14 09:22:43.685921247 +0000 UTC m=+1516.658203300" observedRunningTime="2026-03-14 09:22:44.703772246 +0000 UTC m=+1517.676054299" 
watchObservedRunningTime="2026-03-14 09:22:44.7063918 +0000 UTC m=+1517.678673853" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.739739 4869 scope.go:117] "RemoveContainer" containerID="af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa" Mar 14 09:22:45 crc kubenswrapper[4869]: E0314 09:22:44.740148 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa\": container with ID starting with af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa not found: ID does not exist" containerID="af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.740204 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa"} err="failed to get container status \"af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa\": rpc error: code = NotFound desc = could not find container \"af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa\": container with ID starting with af5f72ead5b14708157d787207c9860f9c5c1bc6ba8511ea334e51d1847b63fa not found: ID does not exist" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.760890 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.776643 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.789612 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:45 crc kubenswrapper[4869]: E0314 09:22:44.790318 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c5dd4a-369c-43a8-9d96-b67997800a45" containerName="nova-manage" Mar 14 09:22:45 crc 
kubenswrapper[4869]: I0314 09:22:44.790335 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c5dd4a-369c-43a8-9d96-b67997800a45" containerName="nova-manage" Mar 14 09:22:45 crc kubenswrapper[4869]: E0314 09:22:44.790372 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15315065-aae1-4408-9929-5183df48226b" containerName="nova-scheduler-scheduler" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.790383 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="15315065-aae1-4408-9929-5183df48226b" containerName="nova-scheduler-scheduler" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.790667 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="15315065-aae1-4408-9929-5183df48226b" containerName="nova-scheduler-scheduler" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.790680 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="31c5dd4a-369c-43a8-9d96-b67997800a45" containerName="nova-manage" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.791667 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.795045 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.802900 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.895117 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knd77\" (UniqueName: \"kubernetes.io/projected/1fb6f926-e8cc-491c-a982-a06813be3fba-kube-api-access-knd77\") pod \"nova-scheduler-0\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.895298 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.895542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-config-data\") pod \"nova-scheduler-0\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.997817 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knd77\" (UniqueName: \"kubernetes.io/projected/1fb6f926-e8cc-491c-a982-a06813be3fba-kube-api-access-knd77\") pod \"nova-scheduler-0\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.997976 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:44.998106 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-config-data\") pod \"nova-scheduler-0\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:45.003657 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-config-data\") pod \"nova-scheduler-0\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:45.003773 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:45.015107 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knd77\" (UniqueName: \"kubernetes.io/projected/1fb6f926-e8cc-491c-a982-a06813be3fba-kube-api-access-knd77\") pod \"nova-scheduler-0\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:45.133624 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:45.724062 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15315065-aae1-4408-9929-5183df48226b" path="/var/lib/kubelet/pods/15315065-aae1-4408-9929-5183df48226b/volumes" Mar 14 09:22:45 crc kubenswrapper[4869]: I0314 09:22:45.864047 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.017324 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-sb\") pod \"49f3fe18-ffd4-4273-97eb-98e94f198608\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.017378 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-swift-storage-0\") pod \"49f3fe18-ffd4-4273-97eb-98e94f198608\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.017461 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-nb\") pod \"49f3fe18-ffd4-4273-97eb-98e94f198608\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.017584 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-config\") pod \"49f3fe18-ffd4-4273-97eb-98e94f198608\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.017634 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-svc\") pod \"49f3fe18-ffd4-4273-97eb-98e94f198608\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.017717 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qccs\" (UniqueName: \"kubernetes.io/projected/49f3fe18-ffd4-4273-97eb-98e94f198608-kube-api-access-9qccs\") pod \"49f3fe18-ffd4-4273-97eb-98e94f198608\" (UID: \"49f3fe18-ffd4-4273-97eb-98e94f198608\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.026554 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f3fe18-ffd4-4273-97eb-98e94f198608-kube-api-access-9qccs" (OuterVolumeSpecName: "kube-api-access-9qccs") pod "49f3fe18-ffd4-4273-97eb-98e94f198608" (UID: "49f3fe18-ffd4-4273-97eb-98e94f198608"). InnerVolumeSpecName "kube-api-access-9qccs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.050050 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.078373 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49f3fe18-ffd4-4273-97eb-98e94f198608" (UID: "49f3fe18-ffd4-4273-97eb-98e94f198608"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.087586 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49f3fe18-ffd4-4273-97eb-98e94f198608" (UID: "49f3fe18-ffd4-4273-97eb-98e94f198608"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.113263 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-config" (OuterVolumeSpecName: "config") pod "49f3fe18-ffd4-4273-97eb-98e94f198608" (UID: "49f3fe18-ffd4-4273-97eb-98e94f198608"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.116687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49f3fe18-ffd4-4273-97eb-98e94f198608" (UID: "49f3fe18-ffd4-4273-97eb-98e94f198608"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.118295 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49f3fe18-ffd4-4273-97eb-98e94f198608" (UID: "49f3fe18-ffd4-4273-97eb-98e94f198608"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.124201 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.124552 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.124567 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qccs\" (UniqueName: \"kubernetes.io/projected/49f3fe18-ffd4-4273-97eb-98e94f198608-kube-api-access-9qccs\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.124583 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.124594 4869 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.124607 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49f3fe18-ffd4-4273-97eb-98e94f198608-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.452427 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.533657 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-logs\") pod \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.533738 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-config-data\") pod \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.534005 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-combined-ca-bundle\") pod \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.534166 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x89mz\" (UniqueName: \"kubernetes.io/projected/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-kube-api-access-x89mz\") pod \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\" (UID: \"6aad5484-e5ed-42b0-9e91-89b0b5d4001c\") " Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.534922 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-logs" (OuterVolumeSpecName: "logs") pod "6aad5484-e5ed-42b0-9e91-89b0b5d4001c" (UID: "6aad5484-e5ed-42b0-9e91-89b0b5d4001c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.540644 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-kube-api-access-x89mz" (OuterVolumeSpecName: "kube-api-access-x89mz") pod "6aad5484-e5ed-42b0-9e91-89b0b5d4001c" (UID: "6aad5484-e5ed-42b0-9e91-89b0b5d4001c"). InnerVolumeSpecName "kube-api-access-x89mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.564796 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-config-data" (OuterVolumeSpecName: "config-data") pod "6aad5484-e5ed-42b0-9e91-89b0b5d4001c" (UID: "6aad5484-e5ed-42b0-9e91-89b0b5d4001c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.580274 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aad5484-e5ed-42b0-9e91-89b0b5d4001c" (UID: "6aad5484-e5ed-42b0-9e91-89b0b5d4001c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.641124 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.641164 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x89mz\" (UniqueName: \"kubernetes.io/projected/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-kube-api-access-x89mz\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.641179 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.641191 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aad5484-e5ed-42b0-9e91-89b0b5d4001c-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.772834 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1fb6f926-e8cc-491c-a982-a06813be3fba","Type":"ContainerStarted","Data":"ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d"} Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.773064 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1fb6f926-e8cc-491c-a982-a06813be3fba","Type":"ContainerStarted","Data":"237f6c2d9e49db5a1d9b3a13e9418b507ba1c98af97f0331205a7387e396d2d5"} Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.784792 4869 generic.go:334] "Generic (PLEG): container finished" podID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerID="6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9" exitCode=0 Mar 14 09:22:46 crc 
kubenswrapper[4869]: I0314 09:22:46.784877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6aad5484-e5ed-42b0-9e91-89b0b5d4001c","Type":"ContainerDied","Data":"6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9"} Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.784909 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6aad5484-e5ed-42b0-9e91-89b0b5d4001c","Type":"ContainerDied","Data":"b686226706ea44ae27ff5905d38110f77fb9ad07fc83ab5b8d4eb7cae9d98afe"} Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.784928 4869 scope.go:117] "RemoveContainer" containerID="6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.784933 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.800633 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.80061024 podStartE2EDuration="2.80061024s" podCreationTimestamp="2026-03-14 09:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:46.797945914 +0000 UTC m=+1519.770227967" watchObservedRunningTime="2026-03-14 09:22:46.80061024 +0000 UTC m=+1519.772892293" Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.815443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5" event={"ID":"49f3fe18-ffd4-4273-97eb-98e94f198608","Type":"ContainerDied","Data":"0066cc4564f012cfaed467dcba567d0323f7a939c1b0edcde86aa1fbe57e1f41"} Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.815583 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7457bb75c5-8j2q5"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.879706 4869 scope.go:117] "RemoveContainer" containerID="bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.904620 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.927656 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.943478 4869 scope.go:117] "RemoveContainer" containerID="6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9"
Mar 14 09:22:46 crc kubenswrapper[4869]: E0314 09:22:46.943949 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9\": container with ID starting with 6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9 not found: ID does not exist" containerID="6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.943989 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9"} err="failed to get container status \"6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9\": rpc error: code = NotFound desc = could not find container \"6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9\": container with ID starting with 6c0ee9def5c019bbb770b20ce24a812a74b4424924e6000bb3b47ecc1d1e4dd9 not found: ID does not exist"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.944012 4869 scope.go:117] "RemoveContainer" containerID="bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8"
Mar 14 09:22:46 crc kubenswrapper[4869]: E0314 09:22:46.944211 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8\": container with ID starting with bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8 not found: ID does not exist" containerID="bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.944236 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8"} err="failed to get container status \"bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8\": rpc error: code = NotFound desc = could not find container \"bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8\": container with ID starting with bd134a6f1682b5d3d4dbdba39f8f4c42745c62ac7f19cf4cbfd4c6b5e72da9e8 not found: ID does not exist"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.944248 4869 scope.go:117] "RemoveContainer" containerID="294ef10949f3b2561553f5e1a6547394e9a8c1ed183f46add858a1790ad88c8f"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.946582 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 14 09:22:46 crc kubenswrapper[4869]: E0314 09:22:46.947241 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f3fe18-ffd4-4273-97eb-98e94f198608" containerName="init"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.947266 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f3fe18-ffd4-4273-97eb-98e94f198608" containerName="init"
Mar 14 09:22:46 crc kubenswrapper[4869]: E0314 09:22:46.947278 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-api"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.947286 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-api"
Mar 14 09:22:46 crc kubenswrapper[4869]: E0314 09:22:46.947306 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-log"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.947314 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-log"
Mar 14 09:22:46 crc kubenswrapper[4869]: E0314 09:22:46.947346 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f3fe18-ffd4-4273-97eb-98e94f198608" containerName="dnsmasq-dns"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.947353 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f3fe18-ffd4-4273-97eb-98e94f198608" containerName="dnsmasq-dns"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.947684 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-log"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.947713 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" containerName="nova-api-api"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.947731 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f3fe18-ffd4-4273-97eb-98e94f198608" containerName="dnsmasq-dns"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.948859 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.959132 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.986729 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 14 09:22:46 crc kubenswrapper[4869]: I0314 09:22:46.988887 4869 scope.go:117] "RemoveContainer" containerID="d67a8e08e67d1b5c0a287048b6089dc3a5d54301860c2fb3fb7cf233d15ef1f6"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.039761 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457bb75c5-8j2q5"]
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.056341 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-config-data\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.056430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.056481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-logs\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.056616 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xq5z\" (UniqueName: \"kubernetes.io/projected/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-kube-api-access-4xq5z\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.059353 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7457bb75c5-8j2q5"]
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.158336 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-logs\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.158520 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xq5z\" (UniqueName: \"kubernetes.io/projected/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-kube-api-access-4xq5z\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.158582 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-config-data\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.158626 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.159163 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-logs\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.163224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.176156 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-config-data\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.176973 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xq5z\" (UniqueName: \"kubernetes.io/projected/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-kube-api-access-4xq5z\") pod \"nova-api-0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.278020 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.713926 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f3fe18-ffd4-4273-97eb-98e94f198608" path="/var/lib/kubelet/pods/49f3fe18-ffd4-4273-97eb-98e94f198608/volumes"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.714804 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aad5484-e5ed-42b0-9e91-89b0b5d4001c" path="/var/lib/kubelet/pods/6aad5484-e5ed-42b0-9e91-89b0b5d4001c/volumes"
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.758525 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 14 09:22:47 crc kubenswrapper[4869]: I0314 09:22:47.830625 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7e8712-8536-44b5-8ad8-f55e4907a6c0","Type":"ContainerStarted","Data":"4a42963c882c6084c4edebe5022119321005c6a5256eaf70112f74ff8bdba7f2"}
Mar 14 09:22:48 crc kubenswrapper[4869]: I0314 09:22:48.853077 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7e8712-8536-44b5-8ad8-f55e4907a6c0","Type":"ContainerStarted","Data":"9fc02af715e09253c1f31042c045b894ff45f52aeaa5a75e68fad8a3dff22beb"}
Mar 14 09:22:48 crc kubenswrapper[4869]: I0314 09:22:48.853694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7e8712-8536-44b5-8ad8-f55e4907a6c0","Type":"ContainerStarted","Data":"babce97252f732587658552169b21081ce1d3a9e43d94690117359051f70712e"}
Mar 14 09:22:48 crc kubenswrapper[4869]: I0314 09:22:48.886312 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.886281878 podStartE2EDuration="2.886281878s" podCreationTimestamp="2026-03-14 09:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:48.881130072 +0000 UTC m=+1521.853412145" watchObservedRunningTime="2026-03-14 09:22:48.886281878 +0000 UTC m=+1521.858563931"
Mar 14 09:22:50 crc kubenswrapper[4869]: I0314 09:22:50.134558 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 14 09:22:52 crc kubenswrapper[4869]: I0314 09:22:52.229232 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8b998"
Mar 14 09:22:52 crc kubenswrapper[4869]: I0314 09:22:52.229785 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8b998"
Mar 14 09:22:52 crc kubenswrapper[4869]: I0314 09:22:52.887826 4869 generic.go:334] "Generic (PLEG): container finished" podID="868be304-0fd3-401b-8f0d-c1997da82c45" containerID="2f43876528e28de0b50e043a4ec6e9fbb8e890e50ef96f7dec215876b2468a59" exitCode=0
Mar 14 09:22:52 crc kubenswrapper[4869]: I0314 09:22:52.887876 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k8p4c" event={"ID":"868be304-0fd3-401b-8f0d-c1997da82c45","Type":"ContainerDied","Data":"2f43876528e28de0b50e043a4ec6e9fbb8e890e50ef96f7dec215876b2468a59"}
Mar 14 09:22:53 crc kubenswrapper[4869]: I0314 09:22:53.288454 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8b998" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerName="registry-server" probeResult="failure" output=<
Mar 14 09:22:53 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s
Mar 14 09:22:53 crc kubenswrapper[4869]: >
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.436562 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k8p4c"
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.623767 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqm4w\" (UniqueName: \"kubernetes.io/projected/868be304-0fd3-401b-8f0d-c1997da82c45-kube-api-access-gqm4w\") pod \"868be304-0fd3-401b-8f0d-c1997da82c45\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") "
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.624022 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-combined-ca-bundle\") pod \"868be304-0fd3-401b-8f0d-c1997da82c45\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") "
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.624117 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-scripts\") pod \"868be304-0fd3-401b-8f0d-c1997da82c45\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") "
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.624206 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-config-data\") pod \"868be304-0fd3-401b-8f0d-c1997da82c45\" (UID: \"868be304-0fd3-401b-8f0d-c1997da82c45\") "
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.631739 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/868be304-0fd3-401b-8f0d-c1997da82c45-kube-api-access-gqm4w" (OuterVolumeSpecName: "kube-api-access-gqm4w") pod "868be304-0fd3-401b-8f0d-c1997da82c45" (UID: "868be304-0fd3-401b-8f0d-c1997da82c45"). InnerVolumeSpecName "kube-api-access-gqm4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.639346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-scripts" (OuterVolumeSpecName: "scripts") pod "868be304-0fd3-401b-8f0d-c1997da82c45" (UID: "868be304-0fd3-401b-8f0d-c1997da82c45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.653299 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "868be304-0fd3-401b-8f0d-c1997da82c45" (UID: "868be304-0fd3-401b-8f0d-c1997da82c45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.670190 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-config-data" (OuterVolumeSpecName: "config-data") pod "868be304-0fd3-401b-8f0d-c1997da82c45" (UID: "868be304-0fd3-401b-8f0d-c1997da82c45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.726548 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqm4w\" (UniqueName: \"kubernetes.io/projected/868be304-0fd3-401b-8f0d-c1997da82c45-kube-api-access-gqm4w\") on node \"crc\" DevicePath \"\""
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.726888 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.726897 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-scripts\") on node \"crc\" DevicePath \"\""
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.726905 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/868be304-0fd3-401b-8f0d-c1997da82c45-config-data\") on node \"crc\" DevicePath \"\""
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.909638 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-k8p4c" event={"ID":"868be304-0fd3-401b-8f0d-c1997da82c45","Type":"ContainerDied","Data":"2232991e0259a79da4fd58b27da218f39d1ca7e9a62d65bd85745bc61a0ec56b"}
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.909682 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2232991e0259a79da4fd58b27da218f39d1ca7e9a62d65bd85745bc61a0ec56b"
Mar 14 09:22:54 crc kubenswrapper[4869]: I0314 09:22:54.909709 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-k8p4c"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.000602 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 14 09:22:55 crc kubenswrapper[4869]: E0314 09:22:55.001189 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="868be304-0fd3-401b-8f0d-c1997da82c45" containerName="nova-cell1-conductor-db-sync"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.001216 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="868be304-0fd3-401b-8f0d-c1997da82c45" containerName="nova-cell1-conductor-db-sync"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.001542 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="868be304-0fd3-401b-8f0d-c1997da82c45" containerName="nova-cell1-conductor-db-sync"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.002530 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.004735 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.012339 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.033069 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da23b22b-973c-422a-8e5a-3ce03f11c458-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"da23b22b-973c-422a-8e5a-3ce03f11c458\") " pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.033430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da23b22b-973c-422a-8e5a-3ce03f11c458-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"da23b22b-973c-422a-8e5a-3ce03f11c458\") " pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.033573 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtqw\" (UniqueName: \"kubernetes.io/projected/da23b22b-973c-422a-8e5a-3ce03f11c458-kube-api-access-7vtqw\") pod \"nova-cell1-conductor-0\" (UID: \"da23b22b-973c-422a-8e5a-3ce03f11c458\") " pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.135072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da23b22b-973c-422a-8e5a-3ce03f11c458-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"da23b22b-973c-422a-8e5a-3ce03f11c458\") " pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.135143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtqw\" (UniqueName: \"kubernetes.io/projected/da23b22b-973c-422a-8e5a-3ce03f11c458-kube-api-access-7vtqw\") pod \"nova-cell1-conductor-0\" (UID: \"da23b22b-973c-422a-8e5a-3ce03f11c458\") " pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.135234 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da23b22b-973c-422a-8e5a-3ce03f11c458-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"da23b22b-973c-422a-8e5a-3ce03f11c458\") " pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.136306 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.140105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da23b22b-973c-422a-8e5a-3ce03f11c458-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"da23b22b-973c-422a-8e5a-3ce03f11c458\") " pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.141314 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da23b22b-973c-422a-8e5a-3ce03f11c458-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"da23b22b-973c-422a-8e5a-3ce03f11c458\") " pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.160160 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtqw\" (UniqueName: \"kubernetes.io/projected/da23b22b-973c-422a-8e5a-3ce03f11c458-kube-api-access-7vtqw\") pod \"nova-cell1-conductor-0\" (UID: \"da23b22b-973c-422a-8e5a-3ce03f11c458\") " pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.180961 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.322999 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.896843 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 14 09:22:55 crc kubenswrapper[4869]: I0314 09:22:55.966196 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 14 09:22:56 crc kubenswrapper[4869]: I0314 09:22:56.704917 4869 scope.go:117] "RemoveContainer" containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d"
Mar 14 09:22:56 crc kubenswrapper[4869]: E0314 09:22:56.705345 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:22:56 crc kubenswrapper[4869]: I0314 09:22:56.705434 4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d"
Mar 14 09:22:56 crc kubenswrapper[4869]: E0314 09:22:56.705652 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:22:56 crc kubenswrapper[4869]: I0314 09:22:56.930180 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"da23b22b-973c-422a-8e5a-3ce03f11c458","Type":"ContainerStarted","Data":"a12faedf2469ad16300b8a3ec86be3f3affc76d6364f7973b87770d428d921c2"}
Mar 14 09:22:56 crc kubenswrapper[4869]: I0314 09:22:56.930227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"da23b22b-973c-422a-8e5a-3ce03f11c458","Type":"ContainerStarted","Data":"cbdddecdfbab7de605d5a78b92ea88546cc0a2c71e2a70a64819cf15ca4c045b"}
Mar 14 09:22:56 crc kubenswrapper[4869]: I0314 09:22:56.930304 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Mar 14 09:22:56 crc kubenswrapper[4869]: I0314 09:22:56.950448 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.950426423 podStartE2EDuration="2.950426423s" podCreationTimestamp="2026-03-14 09:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:22:56.943759078 +0000 UTC m=+1529.916041141" watchObservedRunningTime="2026-03-14 09:22:56.950426423 +0000 UTC m=+1529.922708496"
Mar 14 09:22:57 crc kubenswrapper[4869]: I0314 09:22:57.278864 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 14 09:22:57 crc kubenswrapper[4869]: I0314 09:22:57.279213 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 14 09:22:58 crc kubenswrapper[4869]: I0314 09:22:58.360967 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 14 09:22:58 crc kubenswrapper[4869]: I0314 09:22:58.361070 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 14 09:23:02 crc kubenswrapper[4869]: I0314 09:23:02.183018 4869 scope.go:117] "RemoveContainer" containerID="6609ae58ffeb67243086dca76b9cb01312dbbf3ad47af6125fa3c4518555ba04"
Mar 14 09:23:02 crc kubenswrapper[4869]: I0314 09:23:02.306813 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8b998"
Mar 14 09:23:02 crc kubenswrapper[4869]: I0314 09:23:02.366905 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8b998"
Mar 14 09:23:03 crc kubenswrapper[4869]: I0314 09:23:03.081283 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8b998"]
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.004959 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8b998" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerName="registry-server" containerID="cri-o://1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9" gracePeriod=2
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.555480 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8b998"
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.652896 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g24rd\" (UniqueName: \"kubernetes.io/projected/2f8bfcd6-918b-4007-a900-e44c8628c2ed-kube-api-access-g24rd\") pod \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") "
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.653065 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-catalog-content\") pod \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") "
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.653214 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-utilities\") pod \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\" (UID: \"2f8bfcd6-918b-4007-a900-e44c8628c2ed\") "
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.654693 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-utilities" (OuterVolumeSpecName: "utilities") pod "2f8bfcd6-918b-4007-a900-e44c8628c2ed" (UID: "2f8bfcd6-918b-4007-a900-e44c8628c2ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.669740 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8bfcd6-918b-4007-a900-e44c8628c2ed-kube-api-access-g24rd" (OuterVolumeSpecName: "kube-api-access-g24rd") pod "2f8bfcd6-918b-4007-a900-e44c8628c2ed" (UID: "2f8bfcd6-918b-4007-a900-e44c8628c2ed"). InnerVolumeSpecName "kube-api-access-g24rd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.755171 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g24rd\" (UniqueName: \"kubernetes.io/projected/2f8bfcd6-918b-4007-a900-e44c8628c2ed-kube-api-access-g24rd\") on node \"crc\" DevicePath \"\""
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.755200 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.783739 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f8bfcd6-918b-4007-a900-e44c8628c2ed" (UID: "2f8bfcd6-918b-4007-a900-e44c8628c2ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:23:04 crc kubenswrapper[4869]: I0314 09:23:04.859678 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f8bfcd6-918b-4007-a900-e44c8628c2ed-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.035656 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerID="1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9" exitCode=0
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.035733 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8b998"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.035719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b998" event={"ID":"2f8bfcd6-918b-4007-a900-e44c8628c2ed","Type":"ContainerDied","Data":"1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9"}
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.037299 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b998" event={"ID":"2f8bfcd6-918b-4007-a900-e44c8628c2ed","Type":"ContainerDied","Data":"84b33f42fa53c34c496e2865ee33a578dfad4a7a25481b5521d1e320cfff8cd5"}
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.037328 4869 scope.go:117] "RemoveContainer" containerID="1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.082736 4869 scope.go:117] "RemoveContainer" containerID="dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.086604 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8b998"]
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.097580 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8b998"]
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.121892 4869 scope.go:117] "RemoveContainer" containerID="f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.194172 4869 scope.go:117] "RemoveContainer" containerID="1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9"
Mar 14 09:23:05 crc kubenswrapper[4869]: E0314 09:23:05.194717 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9\": container with ID starting with 1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9 not found: ID does not exist" containerID="1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.194750 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9"} err="failed to get container status \"1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9\": rpc error: code = NotFound desc = could not find container \"1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9\": container with ID starting with 1d4355ddc66a536e6c79aa5f19d27121ea3ec8fd1c93cec24efb4a70b1e8aac9 not found: ID does not exist"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.194775 4869 scope.go:117] "RemoveContainer" containerID="dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508"
Mar 14 09:23:05 crc kubenswrapper[4869]: E0314 09:23:05.194964 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508\": container with ID starting with dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508 not found: ID does not exist" containerID="dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.194994 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508"} err="failed to get container status \"dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508\": rpc error: code = NotFound desc = could not find container \"dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508\": container with ID starting with dbd237deff56c76b20f45c85bc1d5575bc1a0a7be9628346797529da6f556508 not found: ID does not exist"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.195017 4869 scope.go:117] "RemoveContainer" containerID="f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc"
Mar 14 09:23:05 crc kubenswrapper[4869]: E0314 09:23:05.195322 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc\": container with ID starting with f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc not found: ID does not exist" containerID="f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.195350 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc"} err="failed to get container status \"f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc\": rpc error: code = NotFound desc = could not find container \"f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc\": container with ID starting with f91ec61fe9839dc42db3726beb7ea2e5b3eaa99cffa62726cd452662fdfa56fc not found: ID does not exist"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.353603 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Mar 14 09:23:05 crc kubenswrapper[4869]: I0314 09:23:05.725640 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" path="/var/lib/kubelet/pods/2f8bfcd6-918b-4007-a900-e44c8628c2ed/volumes"
Mar 14 09:23:07 crc kubenswrapper[4869]: I0314 09:23:07.285407 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 14 09:23:07 crc kubenswrapper[4869]: I0314 09:23:07.285990
4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 09:23:07 crc kubenswrapper[4869]: I0314 09:23:07.287609 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 14 09:23:07 crc kubenswrapper[4869]: I0314 09:23:07.292459 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 09:23:07 crc kubenswrapper[4869]: I0314 09:23:07.717600 4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" Mar 14 09:23:07 crc kubenswrapper[4869]: E0314 09:23:07.717988 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.070595 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.097873 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.277274 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64674776dc-mx7wm"] Mar 14 09:23:08 crc kubenswrapper[4869]: E0314 09:23:08.277866 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerName="extract-utilities" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.277895 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerName="extract-utilities" Mar 14 09:23:08 crc kubenswrapper[4869]: E0314 09:23:08.277931 4869 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerName="registry-server" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.277941 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerName="registry-server" Mar 14 09:23:08 crc kubenswrapper[4869]: E0314 09:23:08.277973 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerName="extract-content" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.277981 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerName="extract-content" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.278226 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8bfcd6-918b-4007-a900-e44c8628c2ed" containerName="registry-server" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.279705 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.311106 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64674776dc-mx7wm"] Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.344920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-ovsdbserver-nb\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.345083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmqx\" (UniqueName: \"kubernetes.io/projected/1d883534-96aa-48f1-97bb-01a43f7634f4-kube-api-access-zfmqx\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.345168 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-dns-svc\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.345283 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-ovsdbserver-sb\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.345375 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-config\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.345480 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-dns-swift-storage-0\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.447274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-ovsdbserver-sb\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.448361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-config\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.448413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-dns-swift-storage-0\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.448538 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-ovsdbserver-nb\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.448608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfmqx\" (UniqueName: \"kubernetes.io/projected/1d883534-96aa-48f1-97bb-01a43f7634f4-kube-api-access-zfmqx\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.448653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-dns-svc\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.449309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-dns-svc\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.448279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-ovsdbserver-sb\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.449934 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-config\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.450536 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-dns-swift-storage-0\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.451127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d883534-96aa-48f1-97bb-01a43f7634f4-ovsdbserver-nb\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.484341 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfmqx\" (UniqueName: \"kubernetes.io/projected/1d883534-96aa-48f1-97bb-01a43f7634f4-kube-api-access-zfmqx\") pod \"dnsmasq-dns-64674776dc-mx7wm\" (UID: \"1d883534-96aa-48f1-97bb-01a43f7634f4\") " pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:08 crc kubenswrapper[4869]: I0314 09:23:08.623366 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.094501 4869 generic.go:334] "Generic (PLEG): container finished" podID="62664fd0-e198-48ae-add2-1c1918f1a697" containerID="582e654bf3c9642a52dfe82f2fd2ee14bb5bba770cc7cb126de42d4c2a23e581" exitCode=137 Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.094784 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62664fd0-e198-48ae-add2-1c1918f1a697","Type":"ContainerDied","Data":"582e654bf3c9642a52dfe82f2fd2ee14bb5bba770cc7cb126de42d4c2a23e581"} Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.101945 4869 generic.go:334] "Generic (PLEG): container finished" podID="1303f846-c5e7-483c-963d-00ba423883b1" containerID="07c7be6f206163639b2b94c9a261be6b20e585477b735d3f95d562505aebe380" exitCode=137 Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.102008 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1303f846-c5e7-483c-963d-00ba423883b1","Type":"ContainerDied","Data":"07c7be6f206163639b2b94c9a261be6b20e585477b735d3f95d562505aebe380"} Mar 14 09:23:09 crc kubenswrapper[4869]: W0314 09:23:09.276193 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d883534_96aa_48f1_97bb_01a43f7634f4.slice/crio-f177dc82bfb8c7b37613978c857fde932dd55ab1686b551cc32ec8012f50217a WatchSource:0}: Error finding container f177dc82bfb8c7b37613978c857fde932dd55ab1686b551cc32ec8012f50217a: Status 404 returned error can't find the container with id f177dc82bfb8c7b37613978c857fde932dd55ab1686b551cc32ec8012f50217a Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.278432 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64674776dc-mx7wm"] Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.341443 4869 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.425817 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.476922 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wd64\" (UniqueName: \"kubernetes.io/projected/62664fd0-e198-48ae-add2-1c1918f1a697-kube-api-access-6wd64\") pod \"62664fd0-e198-48ae-add2-1c1918f1a697\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.479179 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-combined-ca-bundle\") pod \"62664fd0-e198-48ae-add2-1c1918f1a697\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.479651 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62664fd0-e198-48ae-add2-1c1918f1a697-logs\") pod \"62664fd0-e198-48ae-add2-1c1918f1a697\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.479985 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-config-data\") pod \"62664fd0-e198-48ae-add2-1c1918f1a697\" (UID: \"62664fd0-e198-48ae-add2-1c1918f1a697\") " Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.480067 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62664fd0-e198-48ae-add2-1c1918f1a697-logs" (OuterVolumeSpecName: "logs") pod "62664fd0-e198-48ae-add2-1c1918f1a697" (UID: "62664fd0-e198-48ae-add2-1c1918f1a697"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.481009 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62664fd0-e198-48ae-add2-1c1918f1a697-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.484467 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62664fd0-e198-48ae-add2-1c1918f1a697-kube-api-access-6wd64" (OuterVolumeSpecName: "kube-api-access-6wd64") pod "62664fd0-e198-48ae-add2-1c1918f1a697" (UID: "62664fd0-e198-48ae-add2-1c1918f1a697"). InnerVolumeSpecName "kube-api-access-6wd64". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.538145 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-config-data" (OuterVolumeSpecName: "config-data") pod "62664fd0-e198-48ae-add2-1c1918f1a697" (UID: "62664fd0-e198-48ae-add2-1c1918f1a697"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.542683 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62664fd0-e198-48ae-add2-1c1918f1a697" (UID: "62664fd0-e198-48ae-add2-1c1918f1a697"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.582976 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k725b\" (UniqueName: \"kubernetes.io/projected/1303f846-c5e7-483c-963d-00ba423883b1-kube-api-access-k725b\") pod \"1303f846-c5e7-483c-963d-00ba423883b1\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.583968 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-combined-ca-bundle\") pod \"1303f846-c5e7-483c-963d-00ba423883b1\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.584067 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-config-data\") pod \"1303f846-c5e7-483c-963d-00ba423883b1\" (UID: \"1303f846-c5e7-483c-963d-00ba423883b1\") " Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.584870 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wd64\" (UniqueName: \"kubernetes.io/projected/62664fd0-e198-48ae-add2-1c1918f1a697-kube-api-access-6wd64\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.586335 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.586903 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62664fd0-e198-48ae-add2-1c1918f1a697-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 
09:23:09.590935 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1303f846-c5e7-483c-963d-00ba423883b1-kube-api-access-k725b" (OuterVolumeSpecName: "kube-api-access-k725b") pod "1303f846-c5e7-483c-963d-00ba423883b1" (UID: "1303f846-c5e7-483c-963d-00ba423883b1"). InnerVolumeSpecName "kube-api-access-k725b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.616018 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-config-data" (OuterVolumeSpecName: "config-data") pod "1303f846-c5e7-483c-963d-00ba423883b1" (UID: "1303f846-c5e7-483c-963d-00ba423883b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.619524 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1303f846-c5e7-483c-963d-00ba423883b1" (UID: "1303f846-c5e7-483c-963d-00ba423883b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.688627 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k725b\" (UniqueName: \"kubernetes.io/projected/1303f846-c5e7-483c-963d-00ba423883b1-kube-api-access-k725b\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.688671 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:09 crc kubenswrapper[4869]: I0314 09:23:09.688684 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1303f846-c5e7-483c-963d-00ba423883b1-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.113376 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1303f846-c5e7-483c-963d-00ba423883b1","Type":"ContainerDied","Data":"8bcd350b677ff8b9289ab8e725ffc26bd3d34439bfb016766465699b178ae67d"} Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.113435 4869 scope.go:117] "RemoveContainer" containerID="07c7be6f206163639b2b94c9a261be6b20e585477b735d3f95d562505aebe380" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.113434 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.115793 4869 generic.go:334] "Generic (PLEG): container finished" podID="1d883534-96aa-48f1-97bb-01a43f7634f4" containerID="5a43950701ffabc0c9c42dafce6819837d184fcd40208179b4501ad66122a0f2" exitCode=0 Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.115853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64674776dc-mx7wm" event={"ID":"1d883534-96aa-48f1-97bb-01a43f7634f4","Type":"ContainerDied","Data":"5a43950701ffabc0c9c42dafce6819837d184fcd40208179b4501ad66122a0f2"} Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.115878 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64674776dc-mx7wm" event={"ID":"1d883534-96aa-48f1-97bb-01a43f7634f4","Type":"ContainerStarted","Data":"f177dc82bfb8c7b37613978c857fde932dd55ab1686b551cc32ec8012f50217a"} Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.132089 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.132363 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"62664fd0-e198-48ae-add2-1c1918f1a697","Type":"ContainerDied","Data":"83d941055f75fdf61e640c494e18b314de25c66a3dd4cc165e8c62b6df976128"} Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.190965 4869 scope.go:117] "RemoveContainer" containerID="582e654bf3c9642a52dfe82f2fd2ee14bb5bba770cc7cb126de42d4c2a23e581" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.218707 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.243000 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.258684 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:23:10 crc kubenswrapper[4869]: E0314 09:23:10.259214 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1303f846-c5e7-483c-963d-00ba423883b1" containerName="nova-cell1-novncproxy-novncproxy" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.259230 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1303f846-c5e7-483c-963d-00ba423883b1" containerName="nova-cell1-novncproxy-novncproxy" Mar 14 09:23:10 crc kubenswrapper[4869]: E0314 09:23:10.259268 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62664fd0-e198-48ae-add2-1c1918f1a697" containerName="nova-metadata-metadata" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.259276 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="62664fd0-e198-48ae-add2-1c1918f1a697" containerName="nova-metadata-metadata" Mar 14 09:23:10 crc kubenswrapper[4869]: E0314 09:23:10.259298 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="62664fd0-e198-48ae-add2-1c1918f1a697" containerName="nova-metadata-log" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.259306 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="62664fd0-e198-48ae-add2-1c1918f1a697" containerName="nova-metadata-log" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.259564 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="62664fd0-e198-48ae-add2-1c1918f1a697" containerName="nova-metadata-metadata" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.259586 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1303f846-c5e7-483c-963d-00ba423883b1" containerName="nova-cell1-novncproxy-novncproxy" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.259630 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="62664fd0-e198-48ae-add2-1c1918f1a697" containerName="nova-metadata-log" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.260629 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.266843 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.266983 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.267224 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.291000 4869 scope.go:117] "RemoveContainer" containerID="ac083ae7b59cc20181999c7086b34921b8a9fd679f0cd4da835f84294ad3c8a7" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.299078 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.301187 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.301267 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2nz5\" (UniqueName: \"kubernetes.io/projected/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-kube-api-access-x2nz5\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.301289 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.301313 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.301327 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.313940 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.331884 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.335865 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.337591 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.340619 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.340762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.405000 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.405121 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2nz5\" (UniqueName: \"kubernetes.io/projected/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-kube-api-access-x2nz5\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.405159 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.405184 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 
09:23:10.405205 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.410559 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.416987 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.426169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.428390 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.428495 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 
09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.429066 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2nz5\" (UniqueName: \"kubernetes.io/projected/cccb9f3d-777d-41b6-8a9e-60e91b9fe556-kube-api-access-x2nz5\") pod \"nova-cell1-novncproxy-0\" (UID: \"cccb9f3d-777d-41b6-8a9e-60e91b9fe556\") " pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.507847 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.507993 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-config-data\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.508025 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54953738-53fb-40f9-af9a-aa447ac59a20-logs\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.508095 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.508180 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf2d4\" (UniqueName: \"kubernetes.io/projected/54953738-53fb-40f9-af9a-aa447ac59a20-kube-api-access-zf2d4\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.592748 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.610481 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-config-data\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.610538 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54953738-53fb-40f9-af9a-aa447ac59a20-logs\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.610597 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.610657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf2d4\" (UniqueName: \"kubernetes.io/projected/54953738-53fb-40f9-af9a-aa447ac59a20-kube-api-access-zf2d4\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.610698 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.611602 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54953738-53fb-40f9-af9a-aa447ac59a20-logs\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.617760 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.619129 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-config-data\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.626222 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.654097 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf2d4\" (UniqueName: \"kubernetes.io/projected/54953738-53fb-40f9-af9a-aa447ac59a20-kube-api-access-zf2d4\") pod 
\"nova-metadata-0\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " pod="openstack/nova-metadata-0" Mar 14 09:23:10 crc kubenswrapper[4869]: I0314 09:23:10.700486 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.069441 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.155364 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64674776dc-mx7wm" event={"ID":"1d883534-96aa-48f1-97bb-01a43f7634f4","Type":"ContainerStarted","Data":"8120bdb31fa186b054f68b2e554623b57dc312464fa6cb7d10cd30e3f965dffe"} Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.155477 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.156795 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-log" containerID="cri-o://babce97252f732587658552169b21081ce1d3a9e43d94690117359051f70712e" gracePeriod=30 Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.156919 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-api" containerID="cri-o://9fc02af715e09253c1f31042c045b894ff45f52aeaa5a75e68fad8a3dff22beb" gracePeriod=30 Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.204278 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64674776dc-mx7wm" podStartSLOduration=3.204258363 podStartE2EDuration="3.204258363s" podCreationTimestamp="2026-03-14 09:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 09:23:11.18141844 +0000 UTC m=+1544.153700503" watchObservedRunningTime="2026-03-14 09:23:11.204258363 +0000 UTC m=+1544.176540416" Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.241266 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.328385 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.705284 4869 scope.go:117] "RemoveContainer" containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d" Mar 14 09:23:11 crc kubenswrapper[4869]: E0314 09:23:11.706662 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.718046 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1303f846-c5e7-483c-963d-00ba423883b1" path="/var/lib/kubelet/pods/1303f846-c5e7-483c-963d-00ba423883b1/volumes" Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.719856 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62664fd0-e198-48ae-add2-1c1918f1a697" path="/var/lib/kubelet/pods/62664fd0-e198-48ae-add2-1c1918f1a697/volumes" Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.990950 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.992011 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="sg-core" 
containerID="cri-o://f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2" gracePeriod=30 Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.992042 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="proxy-httpd" containerID="cri-o://76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d" gracePeriod=30 Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.992075 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="ceilometer-notification-agent" containerID="cri-o://cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60" gracePeriod=30 Mar 14 09:23:11 crc kubenswrapper[4869]: I0314 09:23:11.991903 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="ceilometer-central-agent" containerID="cri-o://28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df" gracePeriod=30 Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.167827 4869 generic.go:334] "Generic (PLEG): container finished" podID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerID="f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2" exitCode=2 Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.167883 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerDied","Data":"f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2"} Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.169936 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"54953738-53fb-40f9-af9a-aa447ac59a20","Type":"ContainerStarted","Data":"1c9105f5e95cb2cc6e2cf7589198cdd278adac24beab71adc3f74ca451359840"} Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.169961 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54953738-53fb-40f9-af9a-aa447ac59a20","Type":"ContainerStarted","Data":"7913851fa6ce4bd6f4ba472822f210e9a0d5d4c63a7997cb4986761b88ac1780"} Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.169970 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54953738-53fb-40f9-af9a-aa447ac59a20","Type":"ContainerStarted","Data":"7dc601b9ef94d1857a8a7e5babe8ea3a6cfbc7e6d7e24e885800d12df861e208"} Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.172850 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cccb9f3d-777d-41b6-8a9e-60e91b9fe556","Type":"ContainerStarted","Data":"e7302b0b71581b0a8d1248f5e1601077ab8ff315121a47b170321e952b6099aa"} Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.172879 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cccb9f3d-777d-41b6-8a9e-60e91b9fe556","Type":"ContainerStarted","Data":"0f1f8a02c091acd964ad6022aeff95dfa6f97f768406914c7ca75756558541e2"} Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.183048 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerID="9fc02af715e09253c1f31042c045b894ff45f52aeaa5a75e68fad8a3dff22beb" exitCode=0 Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.183080 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerID="babce97252f732587658552169b21081ce1d3a9e43d94690117359051f70712e" exitCode=143 Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.184151 4869 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-api-0" event={"ID":"ff7e8712-8536-44b5-8ad8-f55e4907a6c0","Type":"ContainerDied","Data":"9fc02af715e09253c1f31042c045b894ff45f52aeaa5a75e68fad8a3dff22beb"} Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.184181 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7e8712-8536-44b5-8ad8-f55e4907a6c0","Type":"ContainerDied","Data":"babce97252f732587658552169b21081ce1d3a9e43d94690117359051f70712e"} Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.195868 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.195852313 podStartE2EDuration="2.195852313s" podCreationTimestamp="2026-03-14 09:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:23:12.19126401 +0000 UTC m=+1545.163546063" watchObservedRunningTime="2026-03-14 09:23:12.195852313 +0000 UTC m=+1545.168134366" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.242018 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.241991481 podStartE2EDuration="2.241991481s" podCreationTimestamp="2026-03-14 09:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:23:12.227539515 +0000 UTC m=+1545.199821568" watchObservedRunningTime="2026-03-14 09:23:12.241991481 +0000 UTC m=+1545.214273544" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.461396 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.661660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-combined-ca-bundle\") pod \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.661704 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-logs\") pod \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.661739 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xq5z\" (UniqueName: \"kubernetes.io/projected/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-kube-api-access-4xq5z\") pod \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.661855 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-config-data\") pod \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\" (UID: \"ff7e8712-8536-44b5-8ad8-f55e4907a6c0\") " Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.663162 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-logs" (OuterVolumeSpecName: "logs") pod "ff7e8712-8536-44b5-8ad8-f55e4907a6c0" (UID: "ff7e8712-8536-44b5-8ad8-f55e4907a6c0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.669435 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-kube-api-access-4xq5z" (OuterVolumeSpecName: "kube-api-access-4xq5z") pod "ff7e8712-8536-44b5-8ad8-f55e4907a6c0" (UID: "ff7e8712-8536-44b5-8ad8-f55e4907a6c0"). InnerVolumeSpecName "kube-api-access-4xq5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.703721 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff7e8712-8536-44b5-8ad8-f55e4907a6c0" (UID: "ff7e8712-8536-44b5-8ad8-f55e4907a6c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.706033 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-config-data" (OuterVolumeSpecName: "config-data") pod "ff7e8712-8536-44b5-8ad8-f55e4907a6c0" (UID: "ff7e8712-8536-44b5-8ad8-f55e4907a6c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.770040 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.770074 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.770085 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xq5z\" (UniqueName: \"kubernetes.io/projected/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-kube-api-access-4xq5z\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:12 crc kubenswrapper[4869]: I0314 09:23:12.770096 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7e8712-8536-44b5-8ad8-f55e4907a6c0-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.195944 4869 generic.go:334] "Generic (PLEG): container finished" podID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerID="76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d" exitCode=0 Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.195978 4869 generic.go:334] "Generic (PLEG): container finished" podID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerID="28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df" exitCode=0 Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.195986 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerDied","Data":"76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d"} Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.196053 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerDied","Data":"28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df"} Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.198407 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7e8712-8536-44b5-8ad8-f55e4907a6c0","Type":"ContainerDied","Data":"4a42963c882c6084c4edebe5022119321005c6a5256eaf70112f74ff8bdba7f2"} Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.198474 4869 scope.go:117] "RemoveContainer" containerID="9fc02af715e09253c1f31042c045b894ff45f52aeaa5a75e68fad8a3dff22beb" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.198622 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.253374 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.268687 4869 scope.go:117] "RemoveContainer" containerID="babce97252f732587658552169b21081ce1d3a9e43d94690117359051f70712e" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.269300 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.295236 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:13 crc kubenswrapper[4869]: E0314 09:23:13.295710 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-api" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.295727 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-api" Mar 14 09:23:13 crc kubenswrapper[4869]: E0314 09:23:13.295738 4869 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-log" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.295745 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-log" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.295929 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-api" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.295952 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" containerName="nova-api-log" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.297002 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.301899 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.315220 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.323340 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.347975 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.487466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.487589 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bn4l\" (UniqueName: \"kubernetes.io/projected/b5a38738-41fd-46a0-9a57-161e783f9039-kube-api-access-6bn4l\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.487615 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-config-data\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.487961 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.488108 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5a38738-41fd-46a0-9a57-161e783f9039-logs\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.488155 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.590190 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.590264 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5a38738-41fd-46a0-9a57-161e783f9039-logs\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.590295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.590398 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.590442 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bn4l\" (UniqueName: \"kubernetes.io/projected/b5a38738-41fd-46a0-9a57-161e783f9039-kube-api-access-6bn4l\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.590470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-config-data\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.590983 
4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5a38738-41fd-46a0-9a57-161e783f9039-logs\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.594463 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.594925 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.598745 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-config-data\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.605236 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.610372 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bn4l\" (UniqueName: \"kubernetes.io/projected/b5a38738-41fd-46a0-9a57-161e783f9039-kube-api-access-6bn4l\") pod \"nova-api-0\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " 
pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.616718 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:23:13 crc kubenswrapper[4869]: I0314 09:23:13.715858 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7e8712-8536-44b5-8ad8-f55e4907a6c0" path="/var/lib/kubelet/pods/ff7e8712-8536-44b5-8ad8-f55e4907a6c0/volumes" Mar 14 09:23:14 crc kubenswrapper[4869]: W0314 09:23:14.142779 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5a38738_41fd_46a0_9a57_161e783f9039.slice/crio-9cb7108392e9059809600b7eb87f51349f2b5cba38a5e564043a765e984832e4 WatchSource:0}: Error finding container 9cb7108392e9059809600b7eb87f51349f2b5cba38a5e564043a765e984832e4: Status 404 returned error can't find the container with id 9cb7108392e9059809600b7eb87f51349f2b5cba38a5e564043a765e984832e4 Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.156199 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.212441 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5a38738-41fd-46a0-9a57-161e783f9039","Type":"ContainerStarted","Data":"9cb7108392e9059809600b7eb87f51349f2b5cba38a5e564043a765e984832e4"} Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.787988 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.933410 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-sg-core-conf-yaml\") pod \"f704d4fe-9b14-40c2-b757-90db0c351d7b\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.933478 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkvj7\" (UniqueName: \"kubernetes.io/projected/f704d4fe-9b14-40c2-b757-90db0c351d7b-kube-api-access-dkvj7\") pod \"f704d4fe-9b14-40c2-b757-90db0c351d7b\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.933614 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-run-httpd\") pod \"f704d4fe-9b14-40c2-b757-90db0c351d7b\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.933648 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-ceilometer-tls-certs\") pod \"f704d4fe-9b14-40c2-b757-90db0c351d7b\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.933714 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-combined-ca-bundle\") pod \"f704d4fe-9b14-40c2-b757-90db0c351d7b\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.933750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-scripts\") pod \"f704d4fe-9b14-40c2-b757-90db0c351d7b\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.933780 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-log-httpd\") pod \"f704d4fe-9b14-40c2-b757-90db0c351d7b\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.933878 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-config-data\") pod \"f704d4fe-9b14-40c2-b757-90db0c351d7b\" (UID: \"f704d4fe-9b14-40c2-b757-90db0c351d7b\") " Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.934143 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f704d4fe-9b14-40c2-b757-90db0c351d7b" (UID: "f704d4fe-9b14-40c2-b757-90db0c351d7b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.934388 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f704d4fe-9b14-40c2-b757-90db0c351d7b" (UID: "f704d4fe-9b14-40c2-b757-90db0c351d7b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.935778 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.935825 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f704d4fe-9b14-40c2-b757-90db0c351d7b-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.941880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-scripts" (OuterVolumeSpecName: "scripts") pod "f704d4fe-9b14-40c2-b757-90db0c351d7b" (UID: "f704d4fe-9b14-40c2-b757-90db0c351d7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.941903 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f704d4fe-9b14-40c2-b757-90db0c351d7b-kube-api-access-dkvj7" (OuterVolumeSpecName: "kube-api-access-dkvj7") pod "f704d4fe-9b14-40c2-b757-90db0c351d7b" (UID: "f704d4fe-9b14-40c2-b757-90db0c351d7b"). InnerVolumeSpecName "kube-api-access-dkvj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:14 crc kubenswrapper[4869]: I0314 09:23:14.989394 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f704d4fe-9b14-40c2-b757-90db0c351d7b" (UID: "f704d4fe-9b14-40c2-b757-90db0c351d7b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.002988 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f704d4fe-9b14-40c2-b757-90db0c351d7b" (UID: "f704d4fe-9b14-40c2-b757-90db0c351d7b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.037033 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.037058 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkvj7\" (UniqueName: \"kubernetes.io/projected/f704d4fe-9b14-40c2-b757-90db0c351d7b-kube-api-access-dkvj7\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.037071 4869 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.037080 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.038866 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f704d4fe-9b14-40c2-b757-90db0c351d7b" (UID: "f704d4fe-9b14-40c2-b757-90db0c351d7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.055668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-config-data" (OuterVolumeSpecName: "config-data") pod "f704d4fe-9b14-40c2-b757-90db0c351d7b" (UID: "f704d4fe-9b14-40c2-b757-90db0c351d7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.144643 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.145138 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f704d4fe-9b14-40c2-b757-90db0c351d7b-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.235186 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5a38738-41fd-46a0-9a57-161e783f9039","Type":"ContainerStarted","Data":"fb02c5a2afd45e3966568287ae39cc48081429e7cf350279bb85e6f026d99ab5"} Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.235233 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5a38738-41fd-46a0-9a57-161e783f9039","Type":"ContainerStarted","Data":"f60c61952a35954c7fc82af2f43f4d99846ecbcea2f56d89e01080bd136c64be"} Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.246325 4869 generic.go:334] "Generic (PLEG): container finished" podID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerID="cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60" exitCode=0 Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.246387 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.246386 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerDied","Data":"cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60"} Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.246442 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f704d4fe-9b14-40c2-b757-90db0c351d7b","Type":"ContainerDied","Data":"c0e7356ec38bbfa6a85d0772b5f20f8aadf4faa04191ef1d3843832bca4b6a48"} Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.246464 4869 scope.go:117] "RemoveContainer" containerID="76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.270877 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.270857147 podStartE2EDuration="2.270857147s" podCreationTimestamp="2026-03-14 09:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:23:15.255480238 +0000 UTC m=+1548.227762311" watchObservedRunningTime="2026-03-14 09:23:15.270857147 +0000 UTC m=+1548.243139200" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.294413 4869 scope.go:117] "RemoveContainer" containerID="f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.294627 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.317099 4869 scope.go:117] "RemoveContainer" containerID="cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.320862 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.337277 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:23:15 crc kubenswrapper[4869]: E0314 09:23:15.338117 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="ceilometer-central-agent" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.338138 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="ceilometer-central-agent" Mar 14 09:23:15 crc kubenswrapper[4869]: E0314 09:23:15.338179 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="proxy-httpd" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.338188 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="proxy-httpd" Mar 14 09:23:15 crc kubenswrapper[4869]: E0314 09:23:15.338206 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="sg-core" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.338213 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="sg-core" Mar 14 09:23:15 crc kubenswrapper[4869]: E0314 09:23:15.338238 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="ceilometer-notification-agent" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.338247 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="ceilometer-notification-agent" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.338524 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" 
containerName="ceilometer-notification-agent" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.338558 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="ceilometer-central-agent" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.338570 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="proxy-httpd" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.338585 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" containerName="sg-core" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.340979 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.344170 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.344404 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.344417 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.353985 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9b9\" (UniqueName: \"kubernetes.io/projected/c7abd091-5889-4e4e-8f12-24f0bcba5262-kube-api-access-6k9b9\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.354351 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.354469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-scripts\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.354658 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.357843 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.357875 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-config-data\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.357908 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7abd091-5889-4e4e-8f12-24f0bcba5262-run-httpd\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: 
I0314 09:23:15.357941 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7abd091-5889-4e4e-8f12-24f0bcba5262-log-httpd\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.364141 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.386013 4869 scope.go:117] "RemoveContainer" containerID="28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.410797 4869 scope.go:117] "RemoveContainer" containerID="76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d" Mar 14 09:23:15 crc kubenswrapper[4869]: E0314 09:23:15.413701 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d\": container with ID starting with 76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d not found: ID does not exist" containerID="76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.413727 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d"} err="failed to get container status \"76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d\": rpc error: code = NotFound desc = could not find container \"76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d\": container with ID starting with 76fb72e7aff406dc367a2fe62f9ca1c867fc75255d9e8edb034da82238bd707d not found: ID does not exist" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.413746 4869 scope.go:117] "RemoveContainer" 
containerID="f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2" Mar 14 09:23:15 crc kubenswrapper[4869]: E0314 09:23:15.414081 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2\": container with ID starting with f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2 not found: ID does not exist" containerID="f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.414133 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2"} err="failed to get container status \"f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2\": rpc error: code = NotFound desc = could not find container \"f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2\": container with ID starting with f1cad89b2cecd2f2a26d85b816a12d780cf7cd82473dcfafdcc09a23e321b4b2 not found: ID does not exist" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.414173 4869 scope.go:117] "RemoveContainer" containerID="cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60" Mar 14 09:23:15 crc kubenswrapper[4869]: E0314 09:23:15.414487 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60\": container with ID starting with cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60 not found: ID does not exist" containerID="cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.414604 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60"} err="failed to get container status \"cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60\": rpc error: code = NotFound desc = could not find container \"cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60\": container with ID starting with cf1367170addbc64401b614540b1e91ff1d97c77b7582428754b9d8f0be45f60 not found: ID does not exist" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.414645 4869 scope.go:117] "RemoveContainer" containerID="28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df" Mar 14 09:23:15 crc kubenswrapper[4869]: E0314 09:23:15.414911 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df\": container with ID starting with 28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df not found: ID does not exist" containerID="28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.414935 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df"} err="failed to get container status \"28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df\": rpc error: code = NotFound desc = could not find container \"28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df\": container with ID starting with 28af552275c9b9bc8582377adb1774af3768856351b266c1db56885ad81eb8df not found: ID does not exist" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.460243 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.460286 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-config-data\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.460314 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7abd091-5889-4e4e-8f12-24f0bcba5262-run-httpd\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.460340 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7abd091-5889-4e4e-8f12-24f0bcba5262-log-httpd\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.460409 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k9b9\" (UniqueName: \"kubernetes.io/projected/c7abd091-5889-4e4e-8f12-24f0bcba5262-kube-api-access-6k9b9\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.460465 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.460497 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-scripts\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.460562 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.461008 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7abd091-5889-4e4e-8f12-24f0bcba5262-run-httpd\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.461262 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7abd091-5889-4e4e-8f12-24f0bcba5262-log-httpd\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.464334 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.465816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 
09:23:15.466126 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-config-data\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.469016 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.471279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7abd091-5889-4e4e-8f12-24f0bcba5262-scripts\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.484905 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k9b9\" (UniqueName: \"kubernetes.io/projected/c7abd091-5889-4e4e-8f12-24f0bcba5262-kube-api-access-6k9b9\") pod \"ceilometer-0\" (UID: \"c7abd091-5889-4e4e-8f12-24f0bcba5262\") " pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.594156 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.662074 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.701401 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.701460 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:23:15 crc kubenswrapper[4869]: I0314 09:23:15.716245 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f704d4fe-9b14-40c2-b757-90db0c351d7b" path="/var/lib/kubelet/pods/f704d4fe-9b14-40c2-b757-90db0c351d7b/volumes" Mar 14 09:23:16 crc kubenswrapper[4869]: I0314 09:23:16.274475 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 14 09:23:16 crc kubenswrapper[4869]: W0314 09:23:16.283322 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7abd091_5889_4e4e_8f12_24f0bcba5262.slice/crio-46ff1ea275ff6b03959b4b31f4744933529978fac38d0c39c97fecf44934984b WatchSource:0}: Error finding container 46ff1ea275ff6b03959b4b31f4744933529978fac38d0c39c97fecf44934984b: Status 404 returned error can't find the container with id 46ff1ea275ff6b03959b4b31f4744933529978fac38d0c39c97fecf44934984b Mar 14 09:23:16 crc kubenswrapper[4869]: I0314 09:23:16.286365 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:23:17 crc kubenswrapper[4869]: I0314 09:23:17.283338 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7abd091-5889-4e4e-8f12-24f0bcba5262","Type":"ContainerStarted","Data":"a8672e8d0edaec23a9544ef4af3eb92cd1975f59c81a4fb4dc340276e491cdbe"} Mar 14 09:23:17 crc kubenswrapper[4869]: I0314 09:23:17.284998 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c7abd091-5889-4e4e-8f12-24f0bcba5262","Type":"ContainerStarted","Data":"5897ec25a2bb3806a4c6d6a9e476c27e469a2bbba2ebf57c14bbf52aecbb32d6"} Mar 14 09:23:17 crc kubenswrapper[4869]: I0314 09:23:17.285127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7abd091-5889-4e4e-8f12-24f0bcba5262","Type":"ContainerStarted","Data":"46ff1ea275ff6b03959b4b31f4744933529978fac38d0c39c97fecf44934984b"} Mar 14 09:23:18 crc kubenswrapper[4869]: I0314 09:23:18.293191 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7abd091-5889-4e4e-8f12-24f0bcba5262","Type":"ContainerStarted","Data":"c4b40df437f3242b23189e8c021f726680f9839980706b8e3a169790db0909b5"} Mar 14 09:23:18 crc kubenswrapper[4869]: I0314 09:23:18.627623 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64674776dc-mx7wm" Mar 14 09:23:18 crc kubenswrapper[4869]: I0314 09:23:18.695659 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784c8c5dcf-6dcv7"] Mar 14 09:23:18 crc kubenswrapper[4869]: I0314 09:23:18.695923 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" podUID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" containerName="dnsmasq-dns" containerID="cri-o://173939437de19f433a974f24f049841bfaa51d521435a9b1a08c89e216a07807" gracePeriod=10 Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.314173 4869 generic.go:334] "Generic (PLEG): container finished" podID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" containerID="173939437de19f433a974f24f049841bfaa51d521435a9b1a08c89e216a07807" exitCode=0 Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.314254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" 
event={"ID":"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45","Type":"ContainerDied","Data":"173939437de19f433a974f24f049841bfaa51d521435a9b1a08c89e216a07807"} Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.370796 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.494830 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-swift-storage-0\") pod \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.494956 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-svc\") pod \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.495064 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-sb\") pod \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.495111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-config\") pod \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.495167 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-nb\") pod 
\"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.495231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88jdl\" (UniqueName: \"kubernetes.io/projected/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-kube-api-access-88jdl\") pod \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\" (UID: \"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45\") " Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.500806 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-kube-api-access-88jdl" (OuterVolumeSpecName: "kube-api-access-88jdl") pod "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" (UID: "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45"). InnerVolumeSpecName "kube-api-access-88jdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.580972 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" (UID: "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.595083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-config" (OuterVolumeSpecName: "config") pod "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" (UID: "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.595622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" (UID: "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.597879 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.597902 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-config\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.597931 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88jdl\" (UniqueName: \"kubernetes.io/projected/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-kube-api-access-88jdl\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.597941 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.608343 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" (UID: "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.611112 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" (UID: "6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.699652 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:19 crc kubenswrapper[4869]: I0314 09:23:19.699690 4869 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.324260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" event={"ID":"6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45","Type":"ContainerDied","Data":"7af3acecc5057866a1376756098d80f3946c89b937836eefcce94322ae9a7537"} Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.324285 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.325249 4869 scope.go:117] "RemoveContainer" containerID="173939437de19f433a974f24f049841bfaa51d521435a9b1a08c89e216a07807" Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.327987 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7abd091-5889-4e4e-8f12-24f0bcba5262","Type":"ContainerStarted","Data":"0ee43135c521821532ff4babdc9ac4f00473f5d101e5bd78b3e284632d41d409"} Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.329116 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.351973 4869 scope.go:117] "RemoveContainer" containerID="9fc48c4d9645c78ee2ad30c8675604fd6b17d956a4d55a4b45d723ec165e633a" Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.389792 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.473455111 podStartE2EDuration="5.389767699s" podCreationTimestamp="2026-03-14 09:23:15 +0000 UTC" firstStartedPulling="2026-03-14 09:23:16.286093141 +0000 UTC m=+1549.258375194" lastFinishedPulling="2026-03-14 09:23:19.202405729 +0000 UTC m=+1552.174687782" observedRunningTime="2026-03-14 09:23:20.351999917 +0000 UTC m=+1553.324281990" watchObservedRunningTime="2026-03-14 09:23:20.389767699 +0000 UTC m=+1553.362049762" Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.422460 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784c8c5dcf-6dcv7"] Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.437218 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784c8c5dcf-6dcv7"] Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.593348 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.627606 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.701817 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 09:23:20 crc kubenswrapper[4869]: I0314 09:23:20.701859 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.365790 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.552413 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xjwxx"] Mar 14 09:23:21 crc kubenswrapper[4869]: E0314 09:23:21.553109 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" containerName="dnsmasq-dns" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.553179 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" containerName="dnsmasq-dns" Mar 14 09:23:21 crc kubenswrapper[4869]: E0314 09:23:21.553240 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" containerName="init" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.553288 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" containerName="init" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.553579 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" containerName="dnsmasq-dns" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.554336 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.556740 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.557162 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.581576 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xjwxx"] Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.638601 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-scripts\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.638672 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j44gx\" (UniqueName: \"kubernetes.io/projected/c0cf5e02-e6ac-4c54-a514-948485fd56fb-kube-api-access-j44gx\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.638793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.638870 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-config-data\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.740560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-scripts\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.740900 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j44gx\" (UniqueName: \"kubernetes.io/projected/c0cf5e02-e6ac-4c54-a514-948485fd56fb-kube-api-access-j44gx\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.741018 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.741104 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-config-data\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.748568 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" 
path="/var/lib/kubelet/pods/6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45/volumes" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.749725 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.750038 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.750260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-config-data\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.753013 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-scripts\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.761580 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 
09:23:21.769062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j44gx\" (UniqueName: \"kubernetes.io/projected/c0cf5e02-e6ac-4c54-a514-948485fd56fb-kube-api-access-j44gx\") pod \"nova-cell1-cell-mapping-xjwxx\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:21 crc kubenswrapper[4869]: I0314 09:23:21.870997 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:22 crc kubenswrapper[4869]: I0314 09:23:22.485240 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xjwxx"] Mar 14 09:23:22 crc kubenswrapper[4869]: I0314 09:23:22.704129 4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" Mar 14 09:23:22 crc kubenswrapper[4869]: E0314 09:23:22.704671 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:23:23 crc kubenswrapper[4869]: I0314 09:23:23.370260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xjwxx" event={"ID":"c0cf5e02-e6ac-4c54-a514-948485fd56fb","Type":"ContainerStarted","Data":"e49c20e0db4363df7ab9b81c9e721d25ed583852678ce43bf2ff0a329c713947"} Mar 14 09:23:23 crc kubenswrapper[4869]: I0314 09:23:23.370854 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xjwxx" event={"ID":"c0cf5e02-e6ac-4c54-a514-948485fd56fb","Type":"ContainerStarted","Data":"9e2a174b67670088301a266fb0ea7e40ffcc6e84825d1a98592081478326614c"} Mar 14 09:23:23 crc kubenswrapper[4869]: I0314 09:23:23.412004 4869 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xjwxx" podStartSLOduration=2.41198269 podStartE2EDuration="2.41198269s" podCreationTimestamp="2026-03-14 09:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:23:23.398827866 +0000 UTC m=+1556.371109919" watchObservedRunningTime="2026-03-14 09:23:23.41198269 +0000 UTC m=+1556.384264743" Mar 14 09:23:23 crc kubenswrapper[4869]: I0314 09:23:23.617747 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:23:23 crc kubenswrapper[4869]: I0314 09:23:23.618810 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:23:24 crc kubenswrapper[4869]: I0314 09:23:24.357304 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-784c8c5dcf-6dcv7" podUID="6bb3840f-f6a1-4d1e-b6bc-7e76e7f5fb45" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.219:5353: i/o timeout" Mar 14 09:23:24 crc kubenswrapper[4869]: I0314 09:23:24.634737 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:23:24 crc kubenswrapper[4869]: I0314 09:23:24.634955 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:23:25 crc kubenswrapper[4869]: I0314 09:23:25.705494 4869 scope.go:117] "RemoveContainer" 
containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d" Mar 14 09:23:25 crc kubenswrapper[4869]: E0314 09:23:25.707055 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:23:29 crc kubenswrapper[4869]: I0314 09:23:29.438797 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0cf5e02-e6ac-4c54-a514-948485fd56fb" containerID="e49c20e0db4363df7ab9b81c9e721d25ed583852678ce43bf2ff0a329c713947" exitCode=0 Mar 14 09:23:29 crc kubenswrapper[4869]: I0314 09:23:29.438839 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xjwxx" event={"ID":"c0cf5e02-e6ac-4c54-a514-948485fd56fb","Type":"ContainerDied","Data":"e49c20e0db4363df7ab9b81c9e721d25ed583852678ce43bf2ff0a329c713947"} Mar 14 09:23:30 crc kubenswrapper[4869]: I0314 09:23:30.707282 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 09:23:30 crc kubenswrapper[4869]: I0314 09:23:30.708756 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 09:23:30 crc kubenswrapper[4869]: I0314 09:23:30.718381 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 14 09:23:30 crc kubenswrapper[4869]: I0314 09:23:30.860712 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:30 crc kubenswrapper[4869]: I0314 09:23:30.994494 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-config-data\") pod \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " Mar 14 09:23:30 crc kubenswrapper[4869]: I0314 09:23:30.994842 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j44gx\" (UniqueName: \"kubernetes.io/projected/c0cf5e02-e6ac-4c54-a514-948485fd56fb-kube-api-access-j44gx\") pod \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " Mar 14 09:23:30 crc kubenswrapper[4869]: I0314 09:23:30.995266 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-scripts\") pod \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " Mar 14 09:23:30 crc kubenswrapper[4869]: I0314 09:23:30.995808 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-combined-ca-bundle\") pod \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\" (UID: \"c0cf5e02-e6ac-4c54-a514-948485fd56fb\") " Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.003193 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-scripts" (OuterVolumeSpecName: "scripts") pod "c0cf5e02-e6ac-4c54-a514-948485fd56fb" (UID: "c0cf5e02-e6ac-4c54-a514-948485fd56fb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.003932 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0cf5e02-e6ac-4c54-a514-948485fd56fb-kube-api-access-j44gx" (OuterVolumeSpecName: "kube-api-access-j44gx") pod "c0cf5e02-e6ac-4c54-a514-948485fd56fb" (UID: "c0cf5e02-e6ac-4c54-a514-948485fd56fb"). InnerVolumeSpecName "kube-api-access-j44gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.045719 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-config-data" (OuterVolumeSpecName: "config-data") pod "c0cf5e02-e6ac-4c54-a514-948485fd56fb" (UID: "c0cf5e02-e6ac-4c54-a514-948485fd56fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.047488 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0cf5e02-e6ac-4c54-a514-948485fd56fb" (UID: "c0cf5e02-e6ac-4c54-a514-948485fd56fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.100297 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-scripts\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.100357 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.100374 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0cf5e02-e6ac-4c54-a514-948485fd56fb-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.100388 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j44gx\" (UniqueName: \"kubernetes.io/projected/c0cf5e02-e6ac-4c54-a514-948485fd56fb-kube-api-access-j44gx\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.481615 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xjwxx" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.481700 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xjwxx" event={"ID":"c0cf5e02-e6ac-4c54-a514-948485fd56fb","Type":"ContainerDied","Data":"9e2a174b67670088301a266fb0ea7e40ffcc6e84825d1a98592081478326614c"} Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.482934 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2a174b67670088301a266fb0ea7e40ffcc6e84825d1a98592081478326614c" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.493218 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.660046 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.660465 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-log" containerID="cri-o://f60c61952a35954c7fc82af2f43f4d99846ecbcea2f56d89e01080bd136c64be" gracePeriod=30 Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.660743 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-api" containerID="cri-o://fb02c5a2afd45e3966568287ae39cc48081429e7cf350279bb85e6f026d99ab5" gracePeriod=30 Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.681210 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.682825 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1fb6f926-e8cc-491c-a982-a06813be3fba" 
containerName="nova-scheduler-scheduler" containerID="cri-o://ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d" gracePeriod=30 Mar 14 09:23:31 crc kubenswrapper[4869]: I0314 09:23:31.734372 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:32 crc kubenswrapper[4869]: I0314 09:23:32.493326 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5a38738-41fd-46a0-9a57-161e783f9039" containerID="f60c61952a35954c7fc82af2f43f4d99846ecbcea2f56d89e01080bd136c64be" exitCode=143 Mar 14 09:23:32 crc kubenswrapper[4869]: I0314 09:23:32.493422 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5a38738-41fd-46a0-9a57-161e783f9039","Type":"ContainerDied","Data":"f60c61952a35954c7fc82af2f43f4d99846ecbcea2f56d89e01080bd136c64be"} Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.508129 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5a38738-41fd-46a0-9a57-161e783f9039" containerID="fb02c5a2afd45e3966568287ae39cc48081429e7cf350279bb85e6f026d99ab5" exitCode=0 Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.508201 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5a38738-41fd-46a0-9a57-161e783f9039","Type":"ContainerDied","Data":"fb02c5a2afd45e3966568287ae39cc48081429e7cf350279bb85e6f026d99ab5"} Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.509075 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" containerName="nova-metadata-log" containerID="cri-o://7913851fa6ce4bd6f4ba472822f210e9a0d5d4c63a7997cb4986761b88ac1780" gracePeriod=30 Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.509145 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" 
containerName="nova-metadata-metadata" containerID="cri-o://1c9105f5e95cb2cc6e2cf7589198cdd278adac24beab71adc3f74ca451359840" gracePeriod=30 Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.715777 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.752373 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-internal-tls-certs\") pod \"b5a38738-41fd-46a0-9a57-161e783f9039\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.752443 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-combined-ca-bundle\") pod \"b5a38738-41fd-46a0-9a57-161e783f9039\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.752674 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5a38738-41fd-46a0-9a57-161e783f9039-logs\") pod \"b5a38738-41fd-46a0-9a57-161e783f9039\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.752752 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-config-data\") pod \"b5a38738-41fd-46a0-9a57-161e783f9039\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.752785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bn4l\" (UniqueName: \"kubernetes.io/projected/b5a38738-41fd-46a0-9a57-161e783f9039-kube-api-access-6bn4l\") pod 
\"b5a38738-41fd-46a0-9a57-161e783f9039\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.752834 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-public-tls-certs\") pod \"b5a38738-41fd-46a0-9a57-161e783f9039\" (UID: \"b5a38738-41fd-46a0-9a57-161e783f9039\") " Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.753690 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a38738-41fd-46a0-9a57-161e783f9039-logs" (OuterVolumeSpecName: "logs") pod "b5a38738-41fd-46a0-9a57-161e783f9039" (UID: "b5a38738-41fd-46a0-9a57-161e783f9039"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.757523 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5a38738-41fd-46a0-9a57-161e783f9039-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.763712 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a38738-41fd-46a0-9a57-161e783f9039-kube-api-access-6bn4l" (OuterVolumeSpecName: "kube-api-access-6bn4l") pod "b5a38738-41fd-46a0-9a57-161e783f9039" (UID: "b5a38738-41fd-46a0-9a57-161e783f9039"). InnerVolumeSpecName "kube-api-access-6bn4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.801378 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5a38738-41fd-46a0-9a57-161e783f9039" (UID: "b5a38738-41fd-46a0-9a57-161e783f9039"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.804413 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-config-data" (OuterVolumeSpecName: "config-data") pod "b5a38738-41fd-46a0-9a57-161e783f9039" (UID: "b5a38738-41fd-46a0-9a57-161e783f9039"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.812968 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b5a38738-41fd-46a0-9a57-161e783f9039" (UID: "b5a38738-41fd-46a0-9a57-161e783f9039"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.822707 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b5a38738-41fd-46a0-9a57-161e783f9039" (UID: "b5a38738-41fd-46a0-9a57-161e783f9039"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.860758 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.860933 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.861044 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.861233 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bn4l\" (UniqueName: \"kubernetes.io/projected/b5a38738-41fd-46a0-9a57-161e783f9039-kube-api-access-6bn4l\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:33 crc kubenswrapper[4869]: I0314 09:23:33.861297 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5a38738-41fd-46a0-9a57-161e783f9039-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.520109 4869 generic.go:334] "Generic (PLEG): container finished" podID="54953738-53fb-40f9-af9a-aa447ac59a20" containerID="1c9105f5e95cb2cc6e2cf7589198cdd278adac24beab71adc3f74ca451359840" exitCode=0 Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.520618 4869 generic.go:334] "Generic (PLEG): container finished" podID="54953738-53fb-40f9-af9a-aa447ac59a20" containerID="7913851fa6ce4bd6f4ba472822f210e9a0d5d4c63a7997cb4986761b88ac1780" exitCode=143 Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.520203 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54953738-53fb-40f9-af9a-aa447ac59a20","Type":"ContainerDied","Data":"1c9105f5e95cb2cc6e2cf7589198cdd278adac24beab71adc3f74ca451359840"} Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.520700 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54953738-53fb-40f9-af9a-aa447ac59a20","Type":"ContainerDied","Data":"7913851fa6ce4bd6f4ba472822f210e9a0d5d4c63a7997cb4986761b88ac1780"} Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.524474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5a38738-41fd-46a0-9a57-161e783f9039","Type":"ContainerDied","Data":"9cb7108392e9059809600b7eb87f51349f2b5cba38a5e564043a765e984832e4"} Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.524566 4869 scope.go:117] "RemoveContainer" containerID="fb02c5a2afd45e3966568287ae39cc48081429e7cf350279bb85e6f026d99ab5" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.524731 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.566829 4869 scope.go:117] "RemoveContainer" containerID="f60c61952a35954c7fc82af2f43f4d99846ecbcea2f56d89e01080bd136c64be" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.569992 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:34 crc kubenswrapper[4869]: E0314 09:23:34.578625 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54953738_53fb_40f9_af9a_aa447ac59a20.slice/crio-conmon-1c9105f5e95cb2cc6e2cf7589198cdd278adac24beab71adc3f74ca451359840.scope\": RecentStats: unable to find data in memory cache]" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.620576 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.649073 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:34 crc kubenswrapper[4869]: E0314 09:23:34.652403 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-api" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.652799 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-api" Mar 14 09:23:34 crc kubenswrapper[4869]: E0314 09:23:34.652863 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0cf5e02-e6ac-4c54-a514-948485fd56fb" containerName="nova-manage" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.652873 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0cf5e02-e6ac-4c54-a514-948485fd56fb" containerName="nova-manage" Mar 14 09:23:34 crc kubenswrapper[4869]: E0314 09:23:34.652889 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-log" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.652897 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-log" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.653204 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0cf5e02-e6ac-4c54-a514-948485fd56fb" containerName="nova-manage" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.653238 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-log" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.653253 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" containerName="nova-api-api" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.656807 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.660596 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.661097 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.661240 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.661241 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.682519 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-config-data\") pod \"nova-api-0\" (UID: 
\"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.682587 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.682641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c2bc163-f581-4326-90e9-2011f06c6c7f-logs\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.682724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8gvr\" (UniqueName: \"kubernetes.io/projected/9c2bc163-f581-4326-90e9-2011f06c6c7f-kube-api-access-s8gvr\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.682766 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.682798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-public-tls-certs\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.704323 
4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.786257 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-config-data\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.786325 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.786375 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c2bc163-f581-4326-90e9-2011f06c6c7f-logs\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.786468 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8gvr\" (UniqueName: \"kubernetes.io/projected/9c2bc163-f581-4326-90e9-2011f06c6c7f-kube-api-access-s8gvr\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.786538 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.786584 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-public-tls-certs\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.787131 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c2bc163-f581-4326-90e9-2011f06c6c7f-logs\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.791216 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.791950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.793360 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-config-data\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.796609 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2bc163-f581-4326-90e9-2011f06c6c7f-public-tls-certs\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc 
kubenswrapper[4869]: I0314 09:23:34.804489 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8gvr\" (UniqueName: \"kubernetes.io/projected/9c2bc163-f581-4326-90e9-2011f06c6c7f-kube-api-access-s8gvr\") pod \"nova-api-0\" (UID: \"9c2bc163-f581-4326-90e9-2011f06c6c7f\") " pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.925773 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.989762 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.990368 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-combined-ca-bundle\") pod \"54953738-53fb-40f9-af9a-aa447ac59a20\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.990523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf2d4\" (UniqueName: \"kubernetes.io/projected/54953738-53fb-40f9-af9a-aa447ac59a20-kube-api-access-zf2d4\") pod \"54953738-53fb-40f9-af9a-aa447ac59a20\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.990624 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-nova-metadata-tls-certs\") pod \"54953738-53fb-40f9-af9a-aa447ac59a20\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.990652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-config-data\") pod \"54953738-53fb-40f9-af9a-aa447ac59a20\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.990688 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54953738-53fb-40f9-af9a-aa447ac59a20-logs\") pod \"54953738-53fb-40f9-af9a-aa447ac59a20\" (UID: \"54953738-53fb-40f9-af9a-aa447ac59a20\") " Mar 14 09:23:34 crc kubenswrapper[4869]: I0314 09:23:34.992669 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54953738-53fb-40f9-af9a-aa447ac59a20-logs" (OuterVolumeSpecName: "logs") pod "54953738-53fb-40f9-af9a-aa447ac59a20" (UID: "54953738-53fb-40f9-af9a-aa447ac59a20"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.005574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54953738-53fb-40f9-af9a-aa447ac59a20-kube-api-access-zf2d4" (OuterVolumeSpecName: "kube-api-access-zf2d4") pod "54953738-53fb-40f9-af9a-aa447ac59a20" (UID: "54953738-53fb-40f9-af9a-aa447ac59a20"). InnerVolumeSpecName "kube-api-access-zf2d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.078718 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54953738-53fb-40f9-af9a-aa447ac59a20" (UID: "54953738-53fb-40f9-af9a-aa447ac59a20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.092566 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-config-data" (OuterVolumeSpecName: "config-data") pod "54953738-53fb-40f9-af9a-aa447ac59a20" (UID: "54953738-53fb-40f9-af9a-aa447ac59a20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.093024 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.093058 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf2d4\" (UniqueName: \"kubernetes.io/projected/54953738-53fb-40f9-af9a-aa447ac59a20-kube-api-access-zf2d4\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.093068 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.093077 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54953738-53fb-40f9-af9a-aa447ac59a20-logs\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.117426 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "54953738-53fb-40f9-af9a-aa447ac59a20" (UID: "54953738-53fb-40f9-af9a-aa447ac59a20"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:35 crc kubenswrapper[4869]: E0314 09:23:35.136915 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d is running failed: container process not found" containerID="ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:23:35 crc kubenswrapper[4869]: E0314 09:23:35.137712 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d is running failed: container process not found" containerID="ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:23:35 crc kubenswrapper[4869]: E0314 09:23:35.138069 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d is running failed: container process not found" containerID="ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 14 09:23:35 crc kubenswrapper[4869]: E0314 09:23:35.138103 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1fb6f926-e8cc-491c-a982-a06813be3fba" containerName="nova-scheduler-scheduler" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.196093 4869 reconciler_common.go:293] "Volume detached for 
volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54953738-53fb-40f9-af9a-aa447ac59a20-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.517484 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 14 09:23:35 crc kubenswrapper[4869]: W0314 09:23:35.520429 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c2bc163_f581_4326_90e9_2011f06c6c7f.slice/crio-26f7018b3a4060849b7d6b52a03cb90461d558f2831cc4464e9f8ee16f4a5c5f WatchSource:0}: Error finding container 26f7018b3a4060849b7d6b52a03cb90461d558f2831cc4464e9f8ee16f4a5c5f: Status 404 returned error can't find the container with id 26f7018b3a4060849b7d6b52a03cb90461d558f2831cc4464e9f8ee16f4a5c5f Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.537746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54953738-53fb-40f9-af9a-aa447ac59a20","Type":"ContainerDied","Data":"7dc601b9ef94d1857a8a7e5babe8ea3a6cfbc7e6d7e24e885800d12df861e208"} Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.537799 4869 scope.go:117] "RemoveContainer" containerID="1c9105f5e95cb2cc6e2cf7589198cdd278adac24beab71adc3f74ca451359840" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.537935 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.560350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c2bc163-f581-4326-90e9-2011f06c6c7f","Type":"ContainerStarted","Data":"26f7018b3a4060849b7d6b52a03cb90461d558f2831cc4464e9f8ee16f4a5c5f"} Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.566685 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3"} Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.583702 4869 generic.go:334] "Generic (PLEG): container finished" podID="1fb6f926-e8cc-491c-a982-a06813be3fba" containerID="ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d" exitCode=0 Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.583796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1fb6f926-e8cc-491c-a982-a06813be3fba","Type":"ContainerDied","Data":"ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d"} Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.694659 4869 scope.go:117] "RemoveContainer" containerID="7913851fa6ce4bd6f4ba472822f210e9a0d5d4c63a7997cb4986761b88ac1780" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.711425 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.718496 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a38738-41fd-46a0-9a57-161e783f9039" path="/var/lib/kubelet/pods/b5a38738-41fd-46a0-9a57-161e783f9039/volumes" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.748536 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.776911 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.794064 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:35 crc kubenswrapper[4869]: E0314 09:23:35.794844 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb6f926-e8cc-491c-a982-a06813be3fba" containerName="nova-scheduler-scheduler" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.794959 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb6f926-e8cc-491c-a982-a06813be3fba" containerName="nova-scheduler-scheduler" Mar 14 09:23:35 crc kubenswrapper[4869]: E0314 09:23:35.795051 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" containerName="nova-metadata-log" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.795112 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" containerName="nova-metadata-log" Mar 14 09:23:35 crc kubenswrapper[4869]: E0314 09:23:35.795193 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" containerName="nova-metadata-metadata" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.795256 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" containerName="nova-metadata-metadata" Mar 14 
09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.796947 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" containerName="nova-metadata-log" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.797056 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" containerName="nova-metadata-metadata" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.797129 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb6f926-e8cc-491c-a982-a06813be3fba" containerName="nova-scheduler-scheduler" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.804636 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.808046 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.808218 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.814655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knd77\" (UniqueName: \"kubernetes.io/projected/1fb6f926-e8cc-491c-a982-a06813be3fba-kube-api-access-knd77\") pod \"1fb6f926-e8cc-491c-a982-a06813be3fba\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.814937 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-combined-ca-bundle\") pod \"1fb6f926-e8cc-491c-a982-a06813be3fba\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.815188 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-config-data\") pod \"1fb6f926-e8cc-491c-a982-a06813be3fba\" (UID: \"1fb6f926-e8cc-491c-a982-a06813be3fba\") " Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.829205 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb6f926-e8cc-491c-a982-a06813be3fba-kube-api-access-knd77" (OuterVolumeSpecName: "kube-api-access-knd77") pod "1fb6f926-e8cc-491c-a982-a06813be3fba" (UID: "1fb6f926-e8cc-491c-a982-a06813be3fba"). InnerVolumeSpecName "kube-api-access-knd77". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.836774 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.856480 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fb6f926-e8cc-491c-a982-a06813be3fba" (UID: "1fb6f926-e8cc-491c-a982-a06813be3fba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.856953 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-config-data" (OuterVolumeSpecName: "config-data") pod "1fb6f926-e8cc-491c-a982-a06813be3fba" (UID: "1fb6f926-e8cc-491c-a982-a06813be3fba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.917654 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13396f06-e344-4bac-996f-aea1d8f3f547-logs\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.918183 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/13396f06-e344-4bac-996f-aea1d8f3f547-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.918320 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c7gx\" (UniqueName: \"kubernetes.io/projected/13396f06-e344-4bac-996f-aea1d8f3f547-kube-api-access-9c7gx\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.918414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13396f06-e344-4bac-996f-aea1d8f3f547-config-data\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.918606 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13396f06-e344-4bac-996f-aea1d8f3f547-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:35 crc 
kubenswrapper[4869]: I0314 09:23:35.918732 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knd77\" (UniqueName: \"kubernetes.io/projected/1fb6f926-e8cc-491c-a982-a06813be3fba-kube-api-access-knd77\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.918804 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:35 crc kubenswrapper[4869]: I0314 09:23:35.918862 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb6f926-e8cc-491c-a982-a06813be3fba-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.021075 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13396f06-e344-4bac-996f-aea1d8f3f547-logs\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.021296 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/13396f06-e344-4bac-996f-aea1d8f3f547-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.021434 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c7gx\" (UniqueName: \"kubernetes.io/projected/13396f06-e344-4bac-996f-aea1d8f3f547-kube-api-access-9c7gx\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.021553 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13396f06-e344-4bac-996f-aea1d8f3f547-config-data\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.021690 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13396f06-e344-4bac-996f-aea1d8f3f547-logs\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.022328 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13396f06-e344-4bac-996f-aea1d8f3f547-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.026164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13396f06-e344-4bac-996f-aea1d8f3f547-config-data\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.026162 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13396f06-e344-4bac-996f-aea1d8f3f547-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.034412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/13396f06-e344-4bac-996f-aea1d8f3f547-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.036629 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c7gx\" (UniqueName: \"kubernetes.io/projected/13396f06-e344-4bac-996f-aea1d8f3f547-kube-api-access-9c7gx\") pod \"nova-metadata-0\" (UID: \"13396f06-e344-4bac-996f-aea1d8f3f547\") " pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.196629 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.596472 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c2bc163-f581-4326-90e9-2011f06c6c7f","Type":"ContainerStarted","Data":"30b177925059e5ac22f442fa82630f98142909138df78ee7ad5a70f4fbb1c24e"} Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.596880 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c2bc163-f581-4326-90e9-2011f06c6c7f","Type":"ContainerStarted","Data":"aa3c7b246e771ba20892de8acbccbdd24563e851027b1ee25348dffa28087378"} Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.600108 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1fb6f926-e8cc-491c-a982-a06813be3fba","Type":"ContainerDied","Data":"237f6c2d9e49db5a1d9b3a13e9418b507ba1c98af97f0331205a7387e396d2d5"} Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.600166 4869 scope.go:117] "RemoveContainer" containerID="ce0cbcaa4cd8de7550f91bf32ce947f958d6eab078764c8b2e516b0b8c58dc6d" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.600163 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.653502 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.653472357 podStartE2EDuration="2.653472357s" podCreationTimestamp="2026-03-14 09:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:23:36.627927727 +0000 UTC m=+1569.600209800" watchObservedRunningTime="2026-03-14 09:23:36.653472357 +0000 UTC m=+1569.625754410" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.707499 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.730101 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.774588 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.775962 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.780932 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.829827 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.862621 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.886446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9tjl\" (UniqueName: \"kubernetes.io/projected/80f6544c-c2ee-4b23-9de0-2b46a87aabe7-kube-api-access-j9tjl\") pod \"nova-scheduler-0\" (UID: \"80f6544c-c2ee-4b23-9de0-2b46a87aabe7\") " pod="openstack/nova-scheduler-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.887254 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f6544c-c2ee-4b23-9de0-2b46a87aabe7-config-data\") pod \"nova-scheduler-0\" (UID: \"80f6544c-c2ee-4b23-9de0-2b46a87aabe7\") " pod="openstack/nova-scheduler-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.887413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f6544c-c2ee-4b23-9de0-2b46a87aabe7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"80f6544c-c2ee-4b23-9de0-2b46a87aabe7\") " pod="openstack/nova-scheduler-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.989090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9tjl\" (UniqueName: \"kubernetes.io/projected/80f6544c-c2ee-4b23-9de0-2b46a87aabe7-kube-api-access-j9tjl\") pod \"nova-scheduler-0\" (UID: 
\"80f6544c-c2ee-4b23-9de0-2b46a87aabe7\") " pod="openstack/nova-scheduler-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.989262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f6544c-c2ee-4b23-9de0-2b46a87aabe7-config-data\") pod \"nova-scheduler-0\" (UID: \"80f6544c-c2ee-4b23-9de0-2b46a87aabe7\") " pod="openstack/nova-scheduler-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.989419 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f6544c-c2ee-4b23-9de0-2b46a87aabe7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"80f6544c-c2ee-4b23-9de0-2b46a87aabe7\") " pod="openstack/nova-scheduler-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.996304 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f6544c-c2ee-4b23-9de0-2b46a87aabe7-config-data\") pod \"nova-scheduler-0\" (UID: \"80f6544c-c2ee-4b23-9de0-2b46a87aabe7\") " pod="openstack/nova-scheduler-0" Mar 14 09:23:36 crc kubenswrapper[4869]: I0314 09:23:36.998230 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f6544c-c2ee-4b23-9de0-2b46a87aabe7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"80f6544c-c2ee-4b23-9de0-2b46a87aabe7\") " pod="openstack/nova-scheduler-0" Mar 14 09:23:37 crc kubenswrapper[4869]: I0314 09:23:37.006235 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9tjl\" (UniqueName: \"kubernetes.io/projected/80f6544c-c2ee-4b23-9de0-2b46a87aabe7-kube-api-access-j9tjl\") pod \"nova-scheduler-0\" (UID: \"80f6544c-c2ee-4b23-9de0-2b46a87aabe7\") " pod="openstack/nova-scheduler-0" Mar 14 09:23:37 crc kubenswrapper[4869]: I0314 09:23:37.186988 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 14 09:23:37 crc kubenswrapper[4869]: I0314 09:23:37.620031 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"13396f06-e344-4bac-996f-aea1d8f3f547","Type":"ContainerStarted","Data":"201fc0a0cabbd31e35ffb2af93908b2a0969f71921849cef40c6072ef4b79b99"} Mar 14 09:23:37 crc kubenswrapper[4869]: I0314 09:23:37.620650 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"13396f06-e344-4bac-996f-aea1d8f3f547","Type":"ContainerStarted","Data":"01f7b2f62010f1ff979a4f050da351f90de3e6ab671ae28f361bc269ed443c6b"} Mar 14 09:23:37 crc kubenswrapper[4869]: I0314 09:23:37.620671 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"13396f06-e344-4bac-996f-aea1d8f3f547","Type":"ContainerStarted","Data":"a25845df8d00cc9cee9a788640207fdb7002f1de0a6ad0abdd3a81d6a13d70fe"} Mar 14 09:23:37 crc kubenswrapper[4869]: I0314 09:23:37.654728 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.654710875 podStartE2EDuration="2.654710875s" podCreationTimestamp="2026-03-14 09:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:23:37.64795749 +0000 UTC m=+1570.620239563" watchObservedRunningTime="2026-03-14 09:23:37.654710875 +0000 UTC m=+1570.626992928" Mar 14 09:23:37 crc kubenswrapper[4869]: W0314 09:23:37.699064 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80f6544c_c2ee_4b23_9de0_2b46a87aabe7.slice/crio-b6b0ae33fbda21b3e32ad36c3abe4319534d414f9a9a54b25a8aa9623185b878 WatchSource:0}: Error finding container b6b0ae33fbda21b3e32ad36c3abe4319534d414f9a9a54b25a8aa9623185b878: Status 404 returned error can't find the container with id 
b6b0ae33fbda21b3e32ad36c3abe4319534d414f9a9a54b25a8aa9623185b878 Mar 14 09:23:37 crc kubenswrapper[4869]: I0314 09:23:37.726352 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb6f926-e8cc-491c-a982-a06813be3fba" path="/var/lib/kubelet/pods/1fb6f926-e8cc-491c-a982-a06813be3fba/volumes" Mar 14 09:23:37 crc kubenswrapper[4869]: I0314 09:23:37.727254 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54953738-53fb-40f9-af9a-aa447ac59a20" path="/var/lib/kubelet/pods/54953738-53fb-40f9-af9a-aa447ac59a20/volumes" Mar 14 09:23:37 crc kubenswrapper[4869]: I0314 09:23:37.728083 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.626119 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7bmz5"] Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.628788 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.641989 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7bmz5"] Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.666760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"80f6544c-c2ee-4b23-9de0-2b46a87aabe7","Type":"ContainerStarted","Data":"eb3c3420eb9db23eb0c4a6db257bd3430d65915e46581adbc023992ed2439534"} Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.666835 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"80f6544c-c2ee-4b23-9de0-2b46a87aabe7","Type":"ContainerStarted","Data":"b6b0ae33fbda21b3e32ad36c3abe4319534d414f9a9a54b25a8aa9623185b878"} Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.686042 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=2.686018846 podStartE2EDuration="2.686018846s" podCreationTimestamp="2026-03-14 09:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:23:38.683822391 +0000 UTC m=+1571.656104444" watchObservedRunningTime="2026-03-14 09:23:38.686018846 +0000 UTC m=+1571.658300909" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.732987 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-catalog-content\") pod \"community-operators-7bmz5\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.733057 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9djf4\" (UniqueName: \"kubernetes.io/projected/b9d55227-1099-411b-ae50-a12f18c326d7-kube-api-access-9djf4\") pod \"community-operators-7bmz5\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.733111 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-utilities\") pod \"community-operators-7bmz5\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.835204 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-catalog-content\") pod \"community-operators-7bmz5\" (UID: 
\"b9d55227-1099-411b-ae50-a12f18c326d7\") " pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.835757 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9djf4\" (UniqueName: \"kubernetes.io/projected/b9d55227-1099-411b-ae50-a12f18c326d7-kube-api-access-9djf4\") pod \"community-operators-7bmz5\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.835811 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-utilities\") pod \"community-operators-7bmz5\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.836368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-utilities\") pod \"community-operators-7bmz5\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.838040 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-catalog-content\") pod \"community-operators-7bmz5\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.861262 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9djf4\" (UniqueName: \"kubernetes.io/projected/b9d55227-1099-411b-ae50-a12f18c326d7-kube-api-access-9djf4\") pod \"community-operators-7bmz5\" (UID: 
\"b9d55227-1099-411b-ae50-a12f18c326d7\") " pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:38 crc kubenswrapper[4869]: I0314 09:23:38.955320 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:39 crc kubenswrapper[4869]: I0314 09:23:39.534571 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7bmz5"] Mar 14 09:23:39 crc kubenswrapper[4869]: W0314 09:23:39.549471 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d55227_1099_411b_ae50_a12f18c326d7.slice/crio-6fefe89220eda0d9a6e5c6fbb7ddc12cc96f611680e1b54d45c098e8920d4282 WatchSource:0}: Error finding container 6fefe89220eda0d9a6e5c6fbb7ddc12cc96f611680e1b54d45c098e8920d4282: Status 404 returned error can't find the container with id 6fefe89220eda0d9a6e5c6fbb7ddc12cc96f611680e1b54d45c098e8920d4282 Mar 14 09:23:39 crc kubenswrapper[4869]: I0314 09:23:39.605006 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:23:39 crc kubenswrapper[4869]: I0314 09:23:39.605288 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:23:39 crc kubenswrapper[4869]: I0314 09:23:39.676430 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bmz5" 
event={"ID":"b9d55227-1099-411b-ae50-a12f18c326d7","Type":"ContainerStarted","Data":"6fefe89220eda0d9a6e5c6fbb7ddc12cc96f611680e1b54d45c098e8920d4282"} Mar 14 09:23:39 crc kubenswrapper[4869]: I0314 09:23:39.705244 4869 scope.go:117] "RemoveContainer" containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d" Mar 14 09:23:40 crc kubenswrapper[4869]: I0314 09:23:40.691392 4869 generic.go:334] "Generic (PLEG): container finished" podID="b9d55227-1099-411b-ae50-a12f18c326d7" containerID="18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f" exitCode=0 Mar 14 09:23:40 crc kubenswrapper[4869]: I0314 09:23:40.691813 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bmz5" event={"ID":"b9d55227-1099-411b-ae50-a12f18c326d7","Type":"ContainerDied","Data":"18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f"} Mar 14 09:23:40 crc kubenswrapper[4869]: I0314 09:23:40.697891 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572"} Mar 14 09:23:41 crc kubenswrapper[4869]: I0314 09:23:41.197434 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:23:41 crc kubenswrapper[4869]: I0314 09:23:41.197784 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 14 09:23:41 crc kubenswrapper[4869]: I0314 09:23:41.718271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bmz5" event={"ID":"b9d55227-1099-411b-ae50-a12f18c326d7","Type":"ContainerStarted","Data":"b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc"} Mar 14 09:23:42 crc kubenswrapper[4869]: I0314 09:23:42.188096 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-scheduler-0" Mar 14 09:23:42 crc kubenswrapper[4869]: I0314 09:23:42.732473 4869 generic.go:334] "Generic (PLEG): container finished" podID="b9d55227-1099-411b-ae50-a12f18c326d7" containerID="b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc" exitCode=0 Mar 14 09:23:42 crc kubenswrapper[4869]: I0314 09:23:42.732563 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bmz5" event={"ID":"b9d55227-1099-411b-ae50-a12f18c326d7","Type":"ContainerDied","Data":"b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc"} Mar 14 09:23:43 crc kubenswrapper[4869]: I0314 09:23:43.744837 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bmz5" event={"ID":"b9d55227-1099-411b-ae50-a12f18c326d7","Type":"ContainerStarted","Data":"5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b"} Mar 14 09:23:43 crc kubenswrapper[4869]: I0314 09:23:43.770783 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7bmz5" podStartSLOduration=3.273348159 podStartE2EDuration="5.770764265s" podCreationTimestamp="2026-03-14 09:23:38 +0000 UTC" firstStartedPulling="2026-03-14 09:23:40.693743952 +0000 UTC m=+1573.666026025" lastFinishedPulling="2026-03-14 09:23:43.191160078 +0000 UTC m=+1576.163442131" observedRunningTime="2026-03-14 09:23:43.761778073 +0000 UTC m=+1576.734060126" watchObservedRunningTime="2026-03-14 09:23:43.770764265 +0000 UTC m=+1576.743046308" Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.404754 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.404816 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.539257 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.540379 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.754707 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" exitCode=1 Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.754765 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3"} Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.755120 4869 scope.go:117] "RemoveContainer" containerID="f0f7639fd5dffa8979262ddbb9c432e80f03bf0bb6336a6703d621082a5bc06d" Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.755558 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:23:44 crc kubenswrapper[4869]: E0314 09:23:44.755829 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.993861 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:23:44 crc kubenswrapper[4869]: I0314 09:23:44.994272 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 14 09:23:45 crc kubenswrapper[4869]: I0314 
09:23:45.668371 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 14 09:23:45 crc kubenswrapper[4869]: I0314 09:23:45.773119 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:23:45 crc kubenswrapper[4869]: E0314 09:23:45.774047 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:23:45 crc kubenswrapper[4869]: I0314 09:23:45.994002 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jbt7m"] Mar 14 09:23:45 crc kubenswrapper[4869]: I0314 09:23:45.995931 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.007700 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9c2bc163-f581-4326-90e9-2011f06c6c7f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.007718 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9c2bc163-f581-4326-90e9-2011f06c6c7f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.024961 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jbt7m"] Mar 14 09:23:46 crc 
kubenswrapper[4869]: I0314 09:23:46.100417 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-utilities\") pod \"certified-operators-jbt7m\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.100501 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-catalog-content\") pod \"certified-operators-jbt7m\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.100603 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlm49\" (UniqueName: \"kubernetes.io/projected/3278e7dd-04ab-45b3-b39e-dd81d1447b15-kube-api-access-nlm49\") pod \"certified-operators-jbt7m\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.197876 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.197948 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.203759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlm49\" (UniqueName: \"kubernetes.io/projected/3278e7dd-04ab-45b3-b39e-dd81d1447b15-kube-api-access-nlm49\") pod \"certified-operators-jbt7m\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc 
kubenswrapper[4869]: I0314 09:23:46.203896 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-utilities\") pod \"certified-operators-jbt7m\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.203940 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-catalog-content\") pod \"certified-operators-jbt7m\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.204575 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-catalog-content\") pod \"certified-operators-jbt7m\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.204686 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-utilities\") pod \"certified-operators-jbt7m\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.229720 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlm49\" (UniqueName: \"kubernetes.io/projected/3278e7dd-04ab-45b3-b39e-dd81d1447b15-kube-api-access-nlm49\") pod \"certified-operators-jbt7m\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.312476 4869 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:46 crc kubenswrapper[4869]: W0314 09:23:46.864866 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3278e7dd_04ab_45b3_b39e_dd81d1447b15.slice/crio-e75ee718c1f438230102f0cc0b4fc04de2c82083d781a80f6e3be47881983fbf WatchSource:0}: Error finding container e75ee718c1f438230102f0cc0b4fc04de2c82083d781a80f6e3be47881983fbf: Status 404 returned error can't find the container with id e75ee718c1f438230102f0cc0b4fc04de2c82083d781a80f6e3be47881983fbf Mar 14 09:23:46 crc kubenswrapper[4869]: I0314 09:23:46.870245 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jbt7m"] Mar 14 09:23:47 crc kubenswrapper[4869]: I0314 09:23:47.187550 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 14 09:23:47 crc kubenswrapper[4869]: I0314 09:23:47.212693 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="13396f06-e344-4bac-996f-aea1d8f3f547" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.233:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 09:23:47 crc kubenswrapper[4869]: I0314 09:23:47.212762 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="13396f06-e344-4bac-996f-aea1d8f3f547" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.233:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 14 09:23:47 crc kubenswrapper[4869]: I0314 09:23:47.225824 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 14 09:23:47 crc kubenswrapper[4869]: I0314 09:23:47.793130 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbt7m" event={"ID":"3278e7dd-04ab-45b3-b39e-dd81d1447b15","Type":"ContainerStarted","Data":"3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7"} Mar 14 09:23:47 crc kubenswrapper[4869]: I0314 09:23:47.793180 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbt7m" event={"ID":"3278e7dd-04ab-45b3-b39e-dd81d1447b15","Type":"ContainerStarted","Data":"e75ee718c1f438230102f0cc0b4fc04de2c82083d781a80f6e3be47881983fbf"} Mar 14 09:23:47 crc kubenswrapper[4869]: I0314 09:23:47.820557 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 14 09:23:48 crc kubenswrapper[4869]: I0314 09:23:48.803418 4869 generic.go:334] "Generic (PLEG): container finished" podID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerID="3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7" exitCode=0 Mar 14 09:23:48 crc kubenswrapper[4869]: I0314 09:23:48.803460 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbt7m" event={"ID":"3278e7dd-04ab-45b3-b39e-dd81d1447b15","Type":"ContainerDied","Data":"3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7"} Mar 14 09:23:48 crc kubenswrapper[4869]: I0314 09:23:48.956948 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:48 crc kubenswrapper[4869]: I0314 09:23:48.956998 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:49 crc kubenswrapper[4869]: I0314 09:23:49.002827 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:49 crc kubenswrapper[4869]: I0314 09:23:49.816790 4869 generic.go:334] "Generic (PLEG): 
container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" exitCode=1 Mar 14 09:23:49 crc kubenswrapper[4869]: I0314 09:23:49.816862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572"} Mar 14 09:23:49 crc kubenswrapper[4869]: I0314 09:23:49.817182 4869 scope.go:117] "RemoveContainer" containerID="21f11eac93684c1ba6abc46bbf0189aee6da4111e34fcd4ae812408b334bcc2d" Mar 14 09:23:49 crc kubenswrapper[4869]: I0314 09:23:49.818143 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:23:49 crc kubenswrapper[4869]: E0314 09:23:49.828165 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:23:49 crc kubenswrapper[4869]: I0314 09:23:49.834717 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbt7m" event={"ID":"3278e7dd-04ab-45b3-b39e-dd81d1447b15","Type":"ContainerStarted","Data":"d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117"} Mar 14 09:23:49 crc kubenswrapper[4869]: I0314 09:23:49.903863 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:50 crc kubenswrapper[4869]: I0314 09:23:50.853973 4869 generic.go:334] "Generic (PLEG): container finished" podID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerID="d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117" 
exitCode=0 Mar 14 09:23:50 crc kubenswrapper[4869]: I0314 09:23:50.854883 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbt7m" event={"ID":"3278e7dd-04ab-45b3-b39e-dd81d1447b15","Type":"ContainerDied","Data":"d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117"} Mar 14 09:23:51 crc kubenswrapper[4869]: I0314 09:23:51.376756 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7bmz5"] Mar 14 09:23:51 crc kubenswrapper[4869]: I0314 09:23:51.865009 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbt7m" event={"ID":"3278e7dd-04ab-45b3-b39e-dd81d1447b15","Type":"ContainerStarted","Data":"244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71"} Mar 14 09:23:51 crc kubenswrapper[4869]: I0314 09:23:51.865140 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7bmz5" podUID="b9d55227-1099-411b-ae50-a12f18c326d7" containerName="registry-server" containerID="cri-o://5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b" gracePeriod=2 Mar 14 09:23:51 crc kubenswrapper[4869]: I0314 09:23:51.886116 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jbt7m" podStartSLOduration=4.170217898 podStartE2EDuration="6.886094782s" podCreationTimestamp="2026-03-14 09:23:45 +0000 UTC" firstStartedPulling="2026-03-14 09:23:48.80588226 +0000 UTC m=+1581.778164323" lastFinishedPulling="2026-03-14 09:23:51.521759154 +0000 UTC m=+1584.494041207" observedRunningTime="2026-03-14 09:23:51.882536994 +0000 UTC m=+1584.854819057" watchObservedRunningTime="2026-03-14 09:23:51.886094782 +0000 UTC m=+1584.858376835" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.347596 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.439956 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9djf4\" (UniqueName: \"kubernetes.io/projected/b9d55227-1099-411b-ae50-a12f18c326d7-kube-api-access-9djf4\") pod \"b9d55227-1099-411b-ae50-a12f18c326d7\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.440094 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-utilities\") pod \"b9d55227-1099-411b-ae50-a12f18c326d7\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.440131 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-catalog-content\") pod \"b9d55227-1099-411b-ae50-a12f18c326d7\" (UID: \"b9d55227-1099-411b-ae50-a12f18c326d7\") " Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.442286 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-utilities" (OuterVolumeSpecName: "utilities") pod "b9d55227-1099-411b-ae50-a12f18c326d7" (UID: "b9d55227-1099-411b-ae50-a12f18c326d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.448815 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d55227-1099-411b-ae50-a12f18c326d7-kube-api-access-9djf4" (OuterVolumeSpecName: "kube-api-access-9djf4") pod "b9d55227-1099-411b-ae50-a12f18c326d7" (UID: "b9d55227-1099-411b-ae50-a12f18c326d7"). InnerVolumeSpecName "kube-api-access-9djf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.501248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9d55227-1099-411b-ae50-a12f18c326d7" (UID: "b9d55227-1099-411b-ae50-a12f18c326d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.543762 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9djf4\" (UniqueName: \"kubernetes.io/projected/b9d55227-1099-411b-ae50-a12f18c326d7-kube-api-access-9djf4\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.544000 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.544079 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9d55227-1099-411b-ae50-a12f18c326d7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.881160 4869 generic.go:334] "Generic (PLEG): container finished" podID="b9d55227-1099-411b-ae50-a12f18c326d7" containerID="5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b" exitCode=0 Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.881243 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7bmz5" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.881248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bmz5" event={"ID":"b9d55227-1099-411b-ae50-a12f18c326d7","Type":"ContainerDied","Data":"5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b"} Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.881910 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bmz5" event={"ID":"b9d55227-1099-411b-ae50-a12f18c326d7","Type":"ContainerDied","Data":"6fefe89220eda0d9a6e5c6fbb7ddc12cc96f611680e1b54d45c098e8920d4282"} Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.881962 4869 scope.go:117] "RemoveContainer" containerID="5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.911637 4869 scope.go:117] "RemoveContainer" containerID="b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.920760 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7bmz5"] Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.935776 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7bmz5"] Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.944575 4869 scope.go:117] "RemoveContainer" containerID="18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.988472 4869 scope.go:117] "RemoveContainer" containerID="5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b" Mar 14 09:23:52 crc kubenswrapper[4869]: E0314 09:23:52.988949 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b\": container with ID starting with 5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b not found: ID does not exist" containerID="5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.988979 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b"} err="failed to get container status \"5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b\": rpc error: code = NotFound desc = could not find container \"5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b\": container with ID starting with 5619350c8a795bc6bd0c1c618445ddacebe626e01c8e69bbfe0603f1d4ec0e2b not found: ID does not exist" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.989023 4869 scope.go:117] "RemoveContainer" containerID="b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc" Mar 14 09:23:52 crc kubenswrapper[4869]: E0314 09:23:52.989350 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc\": container with ID starting with b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc not found: ID does not exist" containerID="b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.989394 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc"} err="failed to get container status \"b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc\": rpc error: code = NotFound desc = could not find container \"b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc\": container with ID 
starting with b2640876731437712be84a54fd414b082e4e90f5f217e04e1ce09c8cf88fb2cc not found: ID does not exist" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.989433 4869 scope.go:117] "RemoveContainer" containerID="18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f" Mar 14 09:23:52 crc kubenswrapper[4869]: E0314 09:23:52.989788 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f\": container with ID starting with 18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f not found: ID does not exist" containerID="18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f" Mar 14 09:23:52 crc kubenswrapper[4869]: I0314 09:23:52.989812 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f"} err="failed to get container status \"18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f\": rpc error: code = NotFound desc = could not find container \"18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f\": container with ID starting with 18812e1de5b9b3aa78d7587418f0a7d576c6c7adcec12930bb9269e19f11a91f not found: ID does not exist" Mar 14 09:23:53 crc kubenswrapper[4869]: I0314 09:23:53.718233 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9d55227-1099-411b-ae50-a12f18c326d7" path="/var/lib/kubelet/pods/b9d55227-1099-411b-ae50-a12f18c326d7/volumes" Mar 14 09:23:54 crc kubenswrapper[4869]: I0314 09:23:54.405346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:23:54 crc kubenswrapper[4869]: I0314 09:23:54.405385 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:23:54 crc kubenswrapper[4869]: 
I0314 09:23:54.406076 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:23:54 crc kubenswrapper[4869]: E0314 09:23:54.406315 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:23:54 crc kubenswrapper[4869]: I0314 09:23:54.538973 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:23:54 crc kubenswrapper[4869]: I0314 09:23:54.539033 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:23:54 crc kubenswrapper[4869]: I0314 09:23:54.539916 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:23:54 crc kubenswrapper[4869]: E0314 09:23:54.540176 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:23:55 crc kubenswrapper[4869]: I0314 09:23:55.001252 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 14 09:23:55 crc kubenswrapper[4869]: I0314 09:23:55.001978 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 09:23:55 crc kubenswrapper[4869]: I0314 09:23:55.013653 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 14 
09:23:55 crc kubenswrapper[4869]: I0314 09:23:55.016562 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 09:23:55 crc kubenswrapper[4869]: I0314 09:23:55.909669 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 14 09:23:55 crc kubenswrapper[4869]: I0314 09:23:55.922209 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 14 09:23:56 crc kubenswrapper[4869]: I0314 09:23:56.207715 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 09:23:56 crc kubenswrapper[4869]: I0314 09:23:56.211090 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 14 09:23:56 crc kubenswrapper[4869]: I0314 09:23:56.212373 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 14 09:23:56 crc kubenswrapper[4869]: I0314 09:23:56.313057 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:56 crc kubenswrapper[4869]: I0314 09:23:56.313126 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:56 crc kubenswrapper[4869]: I0314 09:23:56.363272 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:56 crc kubenswrapper[4869]: I0314 09:23:56.937434 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 14 09:23:56 crc kubenswrapper[4869]: I0314 09:23:56.979616 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:57 crc kubenswrapper[4869]: I0314 09:23:57.376483 4869 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jbt7m"] Mar 14 09:23:58 crc kubenswrapper[4869]: I0314 09:23:58.937416 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jbt7m" podUID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerName="registry-server" containerID="cri-o://244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71" gracePeriod=2 Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.397952 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.497549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-catalog-content\") pod \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.497861 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlm49\" (UniqueName: \"kubernetes.io/projected/3278e7dd-04ab-45b3-b39e-dd81d1447b15-kube-api-access-nlm49\") pod \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.497946 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-utilities\") pod \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\" (UID: \"3278e7dd-04ab-45b3-b39e-dd81d1447b15\") " Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.498726 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-utilities" (OuterVolumeSpecName: "utilities") pod 
"3278e7dd-04ab-45b3-b39e-dd81d1447b15" (UID: "3278e7dd-04ab-45b3-b39e-dd81d1447b15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.499349 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.502923 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3278e7dd-04ab-45b3-b39e-dd81d1447b15-kube-api-access-nlm49" (OuterVolumeSpecName: "kube-api-access-nlm49") pod "3278e7dd-04ab-45b3-b39e-dd81d1447b15" (UID: "3278e7dd-04ab-45b3-b39e-dd81d1447b15"). InnerVolumeSpecName "kube-api-access-nlm49". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.550014 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3278e7dd-04ab-45b3-b39e-dd81d1447b15" (UID: "3278e7dd-04ab-45b3-b39e-dd81d1447b15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.600804 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3278e7dd-04ab-45b3-b39e-dd81d1447b15-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.600835 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlm49\" (UniqueName: \"kubernetes.io/projected/3278e7dd-04ab-45b3-b39e-dd81d1447b15-kube-api-access-nlm49\") on node \"crc\" DevicePath \"\"" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.953577 4869 generic.go:334] "Generic (PLEG): container finished" podID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerID="244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71" exitCode=0 Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.953671 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jbt7m" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.953669 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbt7m" event={"ID":"3278e7dd-04ab-45b3-b39e-dd81d1447b15","Type":"ContainerDied","Data":"244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71"} Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.954925 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jbt7m" event={"ID":"3278e7dd-04ab-45b3-b39e-dd81d1447b15","Type":"ContainerDied","Data":"e75ee718c1f438230102f0cc0b4fc04de2c82083d781a80f6e3be47881983fbf"} Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.954944 4869 scope.go:117] "RemoveContainer" containerID="244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.982476 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-jbt7m"] Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.984318 4869 scope.go:117] "RemoveContainer" containerID="d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117" Mar 14 09:23:59 crc kubenswrapper[4869]: I0314 09:23:59.991605 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jbt7m"] Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.008546 4869 scope.go:117] "RemoveContainer" containerID="3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.054246 4869 scope.go:117] "RemoveContainer" containerID="244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71" Mar 14 09:24:00 crc kubenswrapper[4869]: E0314 09:24:00.055107 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71\": container with ID starting with 244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71 not found: ID does not exist" containerID="244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.055141 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71"} err="failed to get container status \"244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71\": rpc error: code = NotFound desc = could not find container \"244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71\": container with ID starting with 244d75d2682f69b1ece2c5c8802df4d72e6016ebb3cd10d3996e07bac654cc71 not found: ID does not exist" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.055164 4869 scope.go:117] "RemoveContainer" 
containerID="d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117" Mar 14 09:24:00 crc kubenswrapper[4869]: E0314 09:24:00.055426 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117\": container with ID starting with d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117 not found: ID does not exist" containerID="d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.055455 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117"} err="failed to get container status \"d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117\": rpc error: code = NotFound desc = could not find container \"d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117\": container with ID starting with d6b60aee3b52b867f17264af5a8359ce23ed9b1e54a8906c3f5164d9035ba117 not found: ID does not exist" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.055476 4869 scope.go:117] "RemoveContainer" containerID="3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7" Mar 14 09:24:00 crc kubenswrapper[4869]: E0314 09:24:00.055871 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7\": container with ID starting with 3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7 not found: ID does not exist" containerID="3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.055897 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7"} err="failed to get container status \"3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7\": rpc error: code = NotFound desc = could not find container \"3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7\": container with ID starting with 3b64ffd675532bc10cfb11821724baad1700e4a79655eb8e8eede4daa66300e7 not found: ID does not exist" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.149197 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558004-w6wsr"] Mar 14 09:24:00 crc kubenswrapper[4869]: E0314 09:24:00.150272 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerName="registry-server" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.150297 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerName="registry-server" Mar 14 09:24:00 crc kubenswrapper[4869]: E0314 09:24:00.150316 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d55227-1099-411b-ae50-a12f18c326d7" containerName="extract-content" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.150327 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d55227-1099-411b-ae50-a12f18c326d7" containerName="extract-content" Mar 14 09:24:00 crc kubenswrapper[4869]: E0314 09:24:00.150346 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerName="extract-utilities" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.150354 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerName="extract-utilities" Mar 14 09:24:00 crc kubenswrapper[4869]: E0314 09:24:00.150374 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b9d55227-1099-411b-ae50-a12f18c326d7" containerName="registry-server" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.150381 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d55227-1099-411b-ae50-a12f18c326d7" containerName="registry-server" Mar 14 09:24:00 crc kubenswrapper[4869]: E0314 09:24:00.150397 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d55227-1099-411b-ae50-a12f18c326d7" containerName="extract-utilities" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.150405 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d55227-1099-411b-ae50-a12f18c326d7" containerName="extract-utilities" Mar 14 09:24:00 crc kubenswrapper[4869]: E0314 09:24:00.150436 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerName="extract-content" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.150444 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerName="extract-content" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.150716 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d55227-1099-411b-ae50-a12f18c326d7" containerName="registry-server" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.150746 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" containerName="registry-server" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.151586 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558004-w6wsr" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.153778 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.154082 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.154261 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.160069 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558004-w6wsr"] Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.222455 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jq5\" (UniqueName: \"kubernetes.io/projected/727814cc-0540-4db5-8a75-690ae32817da-kube-api-access-96jq5\") pod \"auto-csr-approver-29558004-w6wsr\" (UID: \"727814cc-0540-4db5-8a75-690ae32817da\") " pod="openshift-infra/auto-csr-approver-29558004-w6wsr" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.324576 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96jq5\" (UniqueName: \"kubernetes.io/projected/727814cc-0540-4db5-8a75-690ae32817da-kube-api-access-96jq5\") pod \"auto-csr-approver-29558004-w6wsr\" (UID: \"727814cc-0540-4db5-8a75-690ae32817da\") " pod="openshift-infra/auto-csr-approver-29558004-w6wsr" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.344405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96jq5\" (UniqueName: \"kubernetes.io/projected/727814cc-0540-4db5-8a75-690ae32817da-kube-api-access-96jq5\") pod \"auto-csr-approver-29558004-w6wsr\" (UID: \"727814cc-0540-4db5-8a75-690ae32817da\") " 
pod="openshift-infra/auto-csr-approver-29558004-w6wsr" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.481767 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558004-w6wsr" Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.950813 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558004-w6wsr"] Mar 14 09:24:00 crc kubenswrapper[4869]: I0314 09:24:00.971919 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558004-w6wsr" event={"ID":"727814cc-0540-4db5-8a75-690ae32817da","Type":"ContainerStarted","Data":"b39a78bc4f5a8185f1849ba784e915f8d02b1ffa76d98d46b409d0fa8d789db0"} Mar 14 09:24:01 crc kubenswrapper[4869]: I0314 09:24:01.716055 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3278e7dd-04ab-45b3-b39e-dd81d1447b15" path="/var/lib/kubelet/pods/3278e7dd-04ab-45b3-b39e-dd81d1447b15/volumes" Mar 14 09:24:02 crc kubenswrapper[4869]: I0314 09:24:02.347232 4869 scope.go:117] "RemoveContainer" containerID="4217d63593d81e73c201fffcb2505d02ff5848fab2e75c31cedbb41324918ecd" Mar 14 09:24:02 crc kubenswrapper[4869]: I0314 09:24:02.996826 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558004-w6wsr" event={"ID":"727814cc-0540-4db5-8a75-690ae32817da","Type":"ContainerStarted","Data":"2883e73047c3225f8d8ef5c13a0957ea8e2bfba859c6682072bb40421f2cb6f3"} Mar 14 09:24:03 crc kubenswrapper[4869]: I0314 09:24:03.014177 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558004-w6wsr" podStartSLOduration=1.523082424 podStartE2EDuration="3.014152275s" podCreationTimestamp="2026-03-14 09:24:00 +0000 UTC" firstStartedPulling="2026-03-14 09:24:00.955827561 +0000 UTC m=+1593.928109614" lastFinishedPulling="2026-03-14 09:24:02.446897412 +0000 UTC m=+1595.419179465" 
observedRunningTime="2026-03-14 09:24:03.011522111 +0000 UTC m=+1595.983804164" watchObservedRunningTime="2026-03-14 09:24:03.014152275 +0000 UTC m=+1595.986434338" Mar 14 09:24:04 crc kubenswrapper[4869]: I0314 09:24:04.008287 4869 generic.go:334] "Generic (PLEG): container finished" podID="727814cc-0540-4db5-8a75-690ae32817da" containerID="2883e73047c3225f8d8ef5c13a0957ea8e2bfba859c6682072bb40421f2cb6f3" exitCode=0 Mar 14 09:24:04 crc kubenswrapper[4869]: I0314 09:24:04.008341 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558004-w6wsr" event={"ID":"727814cc-0540-4db5-8a75-690ae32817da","Type":"ContainerDied","Data":"2883e73047c3225f8d8ef5c13a0957ea8e2bfba859c6682072bb40421f2cb6f3"} Mar 14 09:24:05 crc kubenswrapper[4869]: I0314 09:24:05.385705 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558004-w6wsr" Mar 14 09:24:05 crc kubenswrapper[4869]: I0314 09:24:05.544237 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96jq5\" (UniqueName: \"kubernetes.io/projected/727814cc-0540-4db5-8a75-690ae32817da-kube-api-access-96jq5\") pod \"727814cc-0540-4db5-8a75-690ae32817da\" (UID: \"727814cc-0540-4db5-8a75-690ae32817da\") " Mar 14 09:24:05 crc kubenswrapper[4869]: I0314 09:24:05.550640 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727814cc-0540-4db5-8a75-690ae32817da-kube-api-access-96jq5" (OuterVolumeSpecName: "kube-api-access-96jq5") pod "727814cc-0540-4db5-8a75-690ae32817da" (UID: "727814cc-0540-4db5-8a75-690ae32817da"). InnerVolumeSpecName "kube-api-access-96jq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:24:05 crc kubenswrapper[4869]: I0314 09:24:05.647369 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96jq5\" (UniqueName: \"kubernetes.io/projected/727814cc-0540-4db5-8a75-690ae32817da-kube-api-access-96jq5\") on node \"crc\" DevicePath \"\"" Mar 14 09:24:06 crc kubenswrapper[4869]: I0314 09:24:06.034867 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558004-w6wsr" event={"ID":"727814cc-0540-4db5-8a75-690ae32817da","Type":"ContainerDied","Data":"b39a78bc4f5a8185f1849ba784e915f8d02b1ffa76d98d46b409d0fa8d789db0"} Mar 14 09:24:06 crc kubenswrapper[4869]: I0314 09:24:06.034907 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39a78bc4f5a8185f1849ba784e915f8d02b1ffa76d98d46b409d0fa8d789db0" Mar 14 09:24:06 crc kubenswrapper[4869]: I0314 09:24:06.034957 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558004-w6wsr" Mar 14 09:24:06 crc kubenswrapper[4869]: I0314 09:24:06.089342 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29557998-725qn"] Mar 14 09:24:06 crc kubenswrapper[4869]: I0314 09:24:06.099795 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29557998-725qn"] Mar 14 09:24:06 crc kubenswrapper[4869]: I0314 09:24:06.704349 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:24:06 crc kubenswrapper[4869]: E0314 09:24:06.704586 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" 
podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:24:07 crc kubenswrapper[4869]: I0314 09:24:07.713996 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22c0c3e-3573-40f6-8bd9-000533db9955" path="/var/lib/kubelet/pods/d22c0c3e-3573-40f6-8bd9-000533db9955/volumes" Mar 14 09:24:08 crc kubenswrapper[4869]: I0314 09:24:08.706230 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:24:08 crc kubenswrapper[4869]: E0314 09:24:08.707109 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:24:09 crc kubenswrapper[4869]: I0314 09:24:09.605230 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:24:09 crc kubenswrapper[4869]: I0314 09:24:09.605727 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:24:20 crc kubenswrapper[4869]: I0314 09:24:20.705117 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:24:20 crc kubenswrapper[4869]: E0314 09:24:20.706342 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:24:21 crc kubenswrapper[4869]: I0314 09:24:21.704942 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:24:21 crc kubenswrapper[4869]: E0314 09:24:21.705496 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:24:31 crc kubenswrapper[4869]: I0314 09:24:31.704003 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:24:31 crc kubenswrapper[4869]: E0314 09:24:31.704944 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:24:35 crc kubenswrapper[4869]: I0314 09:24:35.704175 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:24:35 crc kubenswrapper[4869]: E0314 09:24:35.705459 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:24:39 crc kubenswrapper[4869]: I0314 09:24:39.605349 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:24:39 crc kubenswrapper[4869]: I0314 09:24:39.605907 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:24:39 crc kubenswrapper[4869]: I0314 09:24:39.605955 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:24:39 crc kubenswrapper[4869]: I0314 09:24:39.606827 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:24:39 crc kubenswrapper[4869]: I0314 09:24:39.606893 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" gracePeriod=600 Mar 14 09:24:39 crc kubenswrapper[4869]: E0314 09:24:39.725834 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:24:40 crc kubenswrapper[4869]: I0314 09:24:40.401803 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" exitCode=0 Mar 14 09:24:40 crc kubenswrapper[4869]: I0314 09:24:40.401892 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311"} Mar 14 09:24:40 crc kubenswrapper[4869]: I0314 09:24:40.402195 4869 scope.go:117] "RemoveContainer" containerID="abbbfeab2461a01be6db6822d3b45b765d683a9778e55e7dd9c19e2a95f80e1d" Mar 14 09:24:40 crc kubenswrapper[4869]: I0314 09:24:40.403149 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:24:40 crc kubenswrapper[4869]: E0314 09:24:40.403579 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:24:42 crc kubenswrapper[4869]: I0314 09:24:42.704302 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:24:42 crc kubenswrapper[4869]: 
E0314 09:24:42.704829 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:24:48 crc kubenswrapper[4869]: I0314 09:24:48.705827 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:24:48 crc kubenswrapper[4869]: E0314 09:24:48.706943 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:24:53 crc kubenswrapper[4869]: I0314 09:24:53.704952 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:24:53 crc kubenswrapper[4869]: E0314 09:24:53.705974 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:24:54 crc kubenswrapper[4869]: I0314 09:24:54.704727 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:24:54 crc kubenswrapper[4869]: E0314 09:24:54.705005 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.167469 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wn8ht"] Mar 14 09:25:00 crc kubenswrapper[4869]: E0314 09:25:00.168741 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727814cc-0540-4db5-8a75-690ae32817da" containerName="oc" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.168759 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="727814cc-0540-4db5-8a75-690ae32817da" containerName="oc" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.169019 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="727814cc-0540-4db5-8a75-690ae32817da" containerName="oc" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.170871 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.177666 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wn8ht"] Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.340656 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgd64\" (UniqueName: \"kubernetes.io/projected/765056fc-4422-4871-92e7-904de19bc8b2-kube-api-access-bgd64\") pod \"redhat-marketplace-wn8ht\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.340711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-utilities\") pod \"redhat-marketplace-wn8ht\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.340851 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-catalog-content\") pod \"redhat-marketplace-wn8ht\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.442409 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-catalog-content\") pod \"redhat-marketplace-wn8ht\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.442546 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bgd64\" (UniqueName: \"kubernetes.io/projected/765056fc-4422-4871-92e7-904de19bc8b2-kube-api-access-bgd64\") pod \"redhat-marketplace-wn8ht\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.442580 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-utilities\") pod \"redhat-marketplace-wn8ht\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.442958 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-catalog-content\") pod \"redhat-marketplace-wn8ht\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.442967 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-utilities\") pod \"redhat-marketplace-wn8ht\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.461634 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgd64\" (UniqueName: \"kubernetes.io/projected/765056fc-4422-4871-92e7-904de19bc8b2-kube-api-access-bgd64\") pod \"redhat-marketplace-wn8ht\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.491810 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.704808 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:25:00 crc kubenswrapper[4869]: E0314 09:25:00.705448 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:25:00 crc kubenswrapper[4869]: I0314 09:25:00.968607 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wn8ht"] Mar 14 09:25:01 crc kubenswrapper[4869]: I0314 09:25:01.630709 4869 generic.go:334] "Generic (PLEG): container finished" podID="765056fc-4422-4871-92e7-904de19bc8b2" containerID="2cf34472b05442bc5c38b671e88889ad40f0edc7298dc4cd16e1cfd85c77fd94" exitCode=0 Mar 14 09:25:01 crc kubenswrapper[4869]: I0314 09:25:01.630799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wn8ht" event={"ID":"765056fc-4422-4871-92e7-904de19bc8b2","Type":"ContainerDied","Data":"2cf34472b05442bc5c38b671e88889ad40f0edc7298dc4cd16e1cfd85c77fd94"} Mar 14 09:25:01 crc kubenswrapper[4869]: I0314 09:25:01.631017 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wn8ht" event={"ID":"765056fc-4422-4871-92e7-904de19bc8b2","Type":"ContainerStarted","Data":"af19be092e39d6d767e917fe27bacb818b1d0324f6a6b6daaf27786b469dec89"} Mar 14 09:25:02 crc kubenswrapper[4869]: I0314 09:25:02.642305 4869 generic.go:334] "Generic (PLEG): container finished" podID="765056fc-4422-4871-92e7-904de19bc8b2" containerID="28b84bd14fadd3fa8b04eb5c538822c475409410eaa266182971069e45627038" exitCode=0 Mar 
14 09:25:02 crc kubenswrapper[4869]: I0314 09:25:02.642404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wn8ht" event={"ID":"765056fc-4422-4871-92e7-904de19bc8b2","Type":"ContainerDied","Data":"28b84bd14fadd3fa8b04eb5c538822c475409410eaa266182971069e45627038"} Mar 14 09:25:02 crc kubenswrapper[4869]: I0314 09:25:02.748851 4869 scope.go:117] "RemoveContainer" containerID="195a69bdc40b87a3eccdc21bd245b09941a49b215cd821f013b05906852a42dd" Mar 14 09:25:02 crc kubenswrapper[4869]: I0314 09:25:02.794969 4869 scope.go:117] "RemoveContainer" containerID="cef5c259787334b288452ddfe57cea9ecaea2a29a7759fd3eea25dda860fb5fd" Mar 14 09:25:03 crc kubenswrapper[4869]: I0314 09:25:03.663746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wn8ht" event={"ID":"765056fc-4422-4871-92e7-904de19bc8b2","Type":"ContainerStarted","Data":"c63d36350756a54db26b70c88788acb3133bc0ebbf21429b2fec1af674181b81"} Mar 14 09:25:03 crc kubenswrapper[4869]: I0314 09:25:03.688131 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wn8ht" podStartSLOduration=2.28494531 podStartE2EDuration="3.688104373s" podCreationTimestamp="2026-03-14 09:25:00 +0000 UTC" firstStartedPulling="2026-03-14 09:25:01.633658774 +0000 UTC m=+1654.605940827" lastFinishedPulling="2026-03-14 09:25:03.036817837 +0000 UTC m=+1656.009099890" observedRunningTime="2026-03-14 09:25:03.680964476 +0000 UTC m=+1656.653246569" watchObservedRunningTime="2026-03-14 09:25:03.688104373 +0000 UTC m=+1656.660386436" Mar 14 09:25:06 crc kubenswrapper[4869]: I0314 09:25:06.703844 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:25:06 crc kubenswrapper[4869]: E0314 09:25:06.704401 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:25:07 crc kubenswrapper[4869]: I0314 09:25:07.714030 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:25:07 crc kubenswrapper[4869]: E0314 09:25:07.714774 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:25:10 crc kubenswrapper[4869]: I0314 09:25:10.492488 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:10 crc kubenswrapper[4869]: I0314 09:25:10.493144 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:10 crc kubenswrapper[4869]: I0314 09:25:10.548165 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:10 crc kubenswrapper[4869]: I0314 09:25:10.794650 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:10 crc kubenswrapper[4869]: I0314 09:25:10.849265 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wn8ht"] Mar 14 09:25:12 crc kubenswrapper[4869]: I0314 09:25:12.753651 4869 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-wn8ht" podUID="765056fc-4422-4871-92e7-904de19bc8b2" containerName="registry-server" containerID="cri-o://c63d36350756a54db26b70c88788acb3133bc0ebbf21429b2fec1af674181b81" gracePeriod=2 Mar 14 09:25:13 crc kubenswrapper[4869]: I0314 09:25:13.767808 4869 generic.go:334] "Generic (PLEG): container finished" podID="765056fc-4422-4871-92e7-904de19bc8b2" containerID="c63d36350756a54db26b70c88788acb3133bc0ebbf21429b2fec1af674181b81" exitCode=0 Mar 14 09:25:13 crc kubenswrapper[4869]: I0314 09:25:13.767876 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wn8ht" event={"ID":"765056fc-4422-4871-92e7-904de19bc8b2","Type":"ContainerDied","Data":"c63d36350756a54db26b70c88788acb3133bc0ebbf21429b2fec1af674181b81"} Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.231108 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.323385 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-catalog-content\") pod \"765056fc-4422-4871-92e7-904de19bc8b2\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.323553 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgd64\" (UniqueName: \"kubernetes.io/projected/765056fc-4422-4871-92e7-904de19bc8b2-kube-api-access-bgd64\") pod \"765056fc-4422-4871-92e7-904de19bc8b2\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.323732 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-utilities\") pod 
\"765056fc-4422-4871-92e7-904de19bc8b2\" (UID: \"765056fc-4422-4871-92e7-904de19bc8b2\") " Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.325071 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-utilities" (OuterVolumeSpecName: "utilities") pod "765056fc-4422-4871-92e7-904de19bc8b2" (UID: "765056fc-4422-4871-92e7-904de19bc8b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.330031 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765056fc-4422-4871-92e7-904de19bc8b2-kube-api-access-bgd64" (OuterVolumeSpecName: "kube-api-access-bgd64") pod "765056fc-4422-4871-92e7-904de19bc8b2" (UID: "765056fc-4422-4871-92e7-904de19bc8b2"). InnerVolumeSpecName "kube-api-access-bgd64". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.358569 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "765056fc-4422-4871-92e7-904de19bc8b2" (UID: "765056fc-4422-4871-92e7-904de19bc8b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.425639 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.425675 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765056fc-4422-4871-92e7-904de19bc8b2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.425688 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgd64\" (UniqueName: \"kubernetes.io/projected/765056fc-4422-4871-92e7-904de19bc8b2-kube-api-access-bgd64\") on node \"crc\" DevicePath \"\"" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.780078 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wn8ht" event={"ID":"765056fc-4422-4871-92e7-904de19bc8b2","Type":"ContainerDied","Data":"af19be092e39d6d767e917fe27bacb818b1d0324f6a6b6daaf27786b469dec89"} Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.780150 4869 scope.go:117] "RemoveContainer" containerID="c63d36350756a54db26b70c88788acb3133bc0ebbf21429b2fec1af674181b81" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.780150 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wn8ht" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.834166 4869 scope.go:117] "RemoveContainer" containerID="28b84bd14fadd3fa8b04eb5c538822c475409410eaa266182971069e45627038" Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.841756 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wn8ht"] Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.850936 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wn8ht"] Mar 14 09:25:14 crc kubenswrapper[4869]: I0314 09:25:14.864108 4869 scope.go:117] "RemoveContainer" containerID="2cf34472b05442bc5c38b671e88889ad40f0edc7298dc4cd16e1cfd85c77fd94" Mar 14 09:25:15 crc kubenswrapper[4869]: I0314 09:25:15.704659 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:25:15 crc kubenswrapper[4869]: E0314 09:25:15.704903 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:25:15 crc kubenswrapper[4869]: I0314 09:25:15.717179 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765056fc-4422-4871-92e7-904de19bc8b2" path="/var/lib/kubelet/pods/765056fc-4422-4871-92e7-904de19bc8b2/volumes" Mar 14 09:25:19 crc kubenswrapper[4869]: I0314 09:25:19.707656 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:25:19 crc kubenswrapper[4869]: E0314 09:25:19.708377 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:25:20 crc kubenswrapper[4869]: I0314 09:25:20.703951 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:25:20 crc kubenswrapper[4869]: E0314 09:25:20.704200 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:25:30 crc kubenswrapper[4869]: I0314 09:25:30.704412 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:25:30 crc kubenswrapper[4869]: E0314 09:25:30.705142 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:25:34 crc kubenswrapper[4869]: I0314 09:25:34.703945 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:25:34 crc kubenswrapper[4869]: E0314 09:25:34.704794 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:25:34 crc kubenswrapper[4869]: I0314 09:25:34.704999 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:25:34 crc kubenswrapper[4869]: E0314 09:25:34.705455 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:25:41 crc kubenswrapper[4869]: I0314 09:25:41.704881 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:25:41 crc kubenswrapper[4869]: E0314 09:25:41.706287 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:25:46 crc kubenswrapper[4869]: I0314 09:25:46.703838 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:25:46 crc kubenswrapper[4869]: E0314 09:25:46.705701 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:25:48 crc kubenswrapper[4869]: I0314 09:25:48.703526 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:25:48 crc kubenswrapper[4869]: E0314 09:25:48.703874 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:25:56 crc kubenswrapper[4869]: I0314 09:25:56.704018 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:25:56 crc kubenswrapper[4869]: E0314 09:25:56.705222 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.158940 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558006-nqpnr"] Mar 14 09:26:00 crc kubenswrapper[4869]: E0314 09:26:00.159965 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765056fc-4422-4871-92e7-904de19bc8b2" containerName="extract-content" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.159981 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="765056fc-4422-4871-92e7-904de19bc8b2" containerName="extract-content" Mar 14 09:26:00 crc kubenswrapper[4869]: E0314 09:26:00.159998 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="765056fc-4422-4871-92e7-904de19bc8b2" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.160005 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="765056fc-4422-4871-92e7-904de19bc8b2" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4869]: E0314 09:26:00.160025 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765056fc-4422-4871-92e7-904de19bc8b2" containerName="extract-utilities" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.160031 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="765056fc-4422-4871-92e7-904de19bc8b2" containerName="extract-utilities" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.160239 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="765056fc-4422-4871-92e7-904de19bc8b2" containerName="registry-server" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.160925 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558006-nqpnr" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.163725 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.164085 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.164149 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.173363 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558006-nqpnr"] Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.181033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tgl5\" (UniqueName: 
\"kubernetes.io/projected/11a586e9-6573-4c52-9036-27ca2d2dac17-kube-api-access-6tgl5\") pod \"auto-csr-approver-29558006-nqpnr\" (UID: \"11a586e9-6573-4c52-9036-27ca2d2dac17\") " pod="openshift-infra/auto-csr-approver-29558006-nqpnr" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.283281 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tgl5\" (UniqueName: \"kubernetes.io/projected/11a586e9-6573-4c52-9036-27ca2d2dac17-kube-api-access-6tgl5\") pod \"auto-csr-approver-29558006-nqpnr\" (UID: \"11a586e9-6573-4c52-9036-27ca2d2dac17\") " pod="openshift-infra/auto-csr-approver-29558006-nqpnr" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.306324 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tgl5\" (UniqueName: \"kubernetes.io/projected/11a586e9-6573-4c52-9036-27ca2d2dac17-kube-api-access-6tgl5\") pod \"auto-csr-approver-29558006-nqpnr\" (UID: \"11a586e9-6573-4c52-9036-27ca2d2dac17\") " pod="openshift-infra/auto-csr-approver-29558006-nqpnr" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.480826 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558006-nqpnr" Mar 14 09:26:00 crc kubenswrapper[4869]: I0314 09:26:00.987280 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558006-nqpnr"] Mar 14 09:26:01 crc kubenswrapper[4869]: I0314 09:26:01.322352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558006-nqpnr" event={"ID":"11a586e9-6573-4c52-9036-27ca2d2dac17","Type":"ContainerStarted","Data":"435fe9acd51a0b3fa0d4d8435203deff178540b790b62997882431c5f4ebb9cb"} Mar 14 09:26:01 crc kubenswrapper[4869]: I0314 09:26:01.704744 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:26:01 crc kubenswrapper[4869]: I0314 09:26:01.704855 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:26:01 crc kubenswrapper[4869]: E0314 09:26:01.705081 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:26:01 crc kubenswrapper[4869]: E0314 09:26:01.705131 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:26:02 crc kubenswrapper[4869]: I0314 09:26:02.332781 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29558006-nqpnr" event={"ID":"11a586e9-6573-4c52-9036-27ca2d2dac17","Type":"ContainerStarted","Data":"749185c05dc94aae5344dff36d22a508d7caaad054e7ab0dd431e070005dd822"} Mar 14 09:26:02 crc kubenswrapper[4869]: I0314 09:26:02.353256 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558006-nqpnr" podStartSLOduration=1.5237925570000002 podStartE2EDuration="2.353233958s" podCreationTimestamp="2026-03-14 09:26:00 +0000 UTC" firstStartedPulling="2026-03-14 09:26:00.987129859 +0000 UTC m=+1713.959411912" lastFinishedPulling="2026-03-14 09:26:01.81657126 +0000 UTC m=+1714.788853313" observedRunningTime="2026-03-14 09:26:02.345215 +0000 UTC m=+1715.317497073" watchObservedRunningTime="2026-03-14 09:26:02.353233958 +0000 UTC m=+1715.325516011" Mar 14 09:26:02 crc kubenswrapper[4869]: I0314 09:26:02.881118 4869 scope.go:117] "RemoveContainer" containerID="7de8d92e14a9f466f55da25c1007ed91c74b49efab49ce1a891348d1a268f783" Mar 14 09:26:03 crc kubenswrapper[4869]: I0314 09:26:03.344874 4869 generic.go:334] "Generic (PLEG): container finished" podID="11a586e9-6573-4c52-9036-27ca2d2dac17" containerID="749185c05dc94aae5344dff36d22a508d7caaad054e7ab0dd431e070005dd822" exitCode=0 Mar 14 09:26:03 crc kubenswrapper[4869]: I0314 09:26:03.344925 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558006-nqpnr" event={"ID":"11a586e9-6573-4c52-9036-27ca2d2dac17","Type":"ContainerDied","Data":"749185c05dc94aae5344dff36d22a508d7caaad054e7ab0dd431e070005dd822"} Mar 14 09:26:04 crc kubenswrapper[4869]: I0314 09:26:04.753824 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558006-nqpnr" Mar 14 09:26:04 crc kubenswrapper[4869]: I0314 09:26:04.774934 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tgl5\" (UniqueName: \"kubernetes.io/projected/11a586e9-6573-4c52-9036-27ca2d2dac17-kube-api-access-6tgl5\") pod \"11a586e9-6573-4c52-9036-27ca2d2dac17\" (UID: \"11a586e9-6573-4c52-9036-27ca2d2dac17\") " Mar 14 09:26:04 crc kubenswrapper[4869]: I0314 09:26:04.780769 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a586e9-6573-4c52-9036-27ca2d2dac17-kube-api-access-6tgl5" (OuterVolumeSpecName: "kube-api-access-6tgl5") pod "11a586e9-6573-4c52-9036-27ca2d2dac17" (UID: "11a586e9-6573-4c52-9036-27ca2d2dac17"). InnerVolumeSpecName "kube-api-access-6tgl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:26:04 crc kubenswrapper[4869]: I0314 09:26:04.876946 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tgl5\" (UniqueName: \"kubernetes.io/projected/11a586e9-6573-4c52-9036-27ca2d2dac17-kube-api-access-6tgl5\") on node \"crc\" DevicePath \"\"" Mar 14 09:26:05 crc kubenswrapper[4869]: I0314 09:26:05.387188 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558006-nqpnr" event={"ID":"11a586e9-6573-4c52-9036-27ca2d2dac17","Type":"ContainerDied","Data":"435fe9acd51a0b3fa0d4d8435203deff178540b790b62997882431c5f4ebb9cb"} Mar 14 09:26:05 crc kubenswrapper[4869]: I0314 09:26:05.387237 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435fe9acd51a0b3fa0d4d8435203deff178540b790b62997882431c5f4ebb9cb" Mar 14 09:26:05 crc kubenswrapper[4869]: I0314 09:26:05.387267 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558006-nqpnr" Mar 14 09:26:05 crc kubenswrapper[4869]: I0314 09:26:05.430750 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558000-dzfv4"] Mar 14 09:26:05 crc kubenswrapper[4869]: I0314 09:26:05.440207 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558000-dzfv4"] Mar 14 09:26:05 crc kubenswrapper[4869]: I0314 09:26:05.721791 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d5ea86-e6ae-43ec-acd0-8123f0a60d87" path="/var/lib/kubelet/pods/98d5ea86-e6ae-43ec-acd0-8123f0a60d87/volumes" Mar 14 09:26:11 crc kubenswrapper[4869]: I0314 09:26:11.704319 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:26:11 crc kubenswrapper[4869]: E0314 09:26:11.705124 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:26:12 crc kubenswrapper[4869]: I0314 09:26:12.705616 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:26:12 crc kubenswrapper[4869]: E0314 09:26:12.706181 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:26:15 crc kubenswrapper[4869]: I0314 
09:26:15.704329 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:26:15 crc kubenswrapper[4869]: E0314 09:26:15.704961 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:26:24 crc kubenswrapper[4869]: I0314 09:26:24.704768 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:26:24 crc kubenswrapper[4869]: E0314 09:26:24.706002 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:26:26 crc kubenswrapper[4869]: I0314 09:26:26.704216 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:26:27 crc kubenswrapper[4869]: I0314 09:26:27.639749 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a"} Mar 14 09:26:27 crc kubenswrapper[4869]: I0314 09:26:27.717068 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:26:27 crc kubenswrapper[4869]: E0314 09:26:27.717533 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:26:34 crc kubenswrapper[4869]: I0314 09:26:34.539472 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:26:34 crc kubenswrapper[4869]: I0314 09:26:34.540166 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:26:35 crc kubenswrapper[4869]: I0314 09:26:35.721203 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" exitCode=1 Mar 14 09:26:35 crc kubenswrapper[4869]: I0314 09:26:35.721268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a"} Mar 14 09:26:35 crc kubenswrapper[4869]: I0314 09:26:35.721564 4869 scope.go:117] "RemoveContainer" containerID="18073f17d3c7d2879486cc20427eab918d04a29213a93c481fcf6e2ac6956ac3" Mar 14 09:26:35 crc kubenswrapper[4869]: I0314 09:26:35.722307 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:26:35 crc kubenswrapper[4869]: E0314 09:26:35.722540 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 
09:26:36 crc kubenswrapper[4869]: I0314 09:26:36.704337 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:26:37 crc kubenswrapper[4869]: I0314 09:26:37.748059 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd"} Mar 14 09:26:41 crc kubenswrapper[4869]: I0314 09:26:41.704688 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:26:41 crc kubenswrapper[4869]: E0314 09:26:41.705717 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:26:44 crc kubenswrapper[4869]: I0314 09:26:44.405187 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:26:44 crc kubenswrapper[4869]: I0314 09:26:44.405591 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:26:44 crc kubenswrapper[4869]: I0314 09:26:44.538982 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:26:44 crc kubenswrapper[4869]: I0314 09:26:44.539041 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:26:44 crc kubenswrapper[4869]: I0314 09:26:44.539796 4869 scope.go:117] "RemoveContainer" 
containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:26:44 crc kubenswrapper[4869]: E0314 09:26:44.539995 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:26:45 crc kubenswrapper[4869]: I0314 09:26:45.857208 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" exitCode=1 Mar 14 09:26:45 crc kubenswrapper[4869]: I0314 09:26:45.857267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd"} Mar 14 09:26:45 crc kubenswrapper[4869]: I0314 09:26:45.857309 4869 scope.go:117] "RemoveContainer" containerID="083761fc24fa6e2f1074f4af2366f142e17ba519467b2862ae392b7cc9f39572" Mar 14 09:26:45 crc kubenswrapper[4869]: I0314 09:26:45.858130 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:26:45 crc kubenswrapper[4869]: E0314 09:26:45.858571 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:26:54 crc kubenswrapper[4869]: I0314 09:26:54.404411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:26:54 crc kubenswrapper[4869]: I0314 09:26:54.404923 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:26:54 crc kubenswrapper[4869]: I0314 09:26:54.405922 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:26:54 crc kubenswrapper[4869]: E0314 09:26:54.406357 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:26:56 crc kubenswrapper[4869]: I0314 09:26:56.704609 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:26:56 crc kubenswrapper[4869]: E0314 09:26:56.705305 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:26:57 crc kubenswrapper[4869]: I0314 09:26:57.713336 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:26:57 crc kubenswrapper[4869]: E0314 09:26:57.713981 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" 
pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:27:02 crc kubenswrapper[4869]: I0314 09:27:02.975411 4869 scope.go:117] "RemoveContainer" containerID="59357f3796fae4d201fbab2e627c8f81c9b517d99abd50f3c3c87f71d015b2c2" Mar 14 09:27:08 crc kubenswrapper[4869]: I0314 09:27:08.704434 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:27:08 crc kubenswrapper[4869]: I0314 09:27:08.705108 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:27:08 crc kubenswrapper[4869]: E0314 09:27:08.705339 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:27:08 crc kubenswrapper[4869]: E0314 09:27:08.705384 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:27:09 crc kubenswrapper[4869]: I0314 09:27:09.705020 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:27:09 crc kubenswrapper[4869]: E0314 09:27:09.705686 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:27:19 crc kubenswrapper[4869]: I0314 09:27:19.704326 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:27:19 crc kubenswrapper[4869]: E0314 09:27:19.705095 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:27:21 crc kubenswrapper[4869]: I0314 09:27:21.704829 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:27:21 crc kubenswrapper[4869]: E0314 09:27:21.705491 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:27:23 crc kubenswrapper[4869]: I0314 09:27:23.703957 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:27:23 crc kubenswrapper[4869]: E0314 09:27:23.704385 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" 
podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:27:31 crc kubenswrapper[4869]: I0314 09:27:31.703918 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:27:31 crc kubenswrapper[4869]: E0314 09:27:31.704529 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:27:35 crc kubenswrapper[4869]: I0314 09:27:35.704588 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:27:35 crc kubenswrapper[4869]: E0314 09:27:35.705235 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:27:36 crc kubenswrapper[4869]: I0314 09:27:36.704348 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:27:36 crc kubenswrapper[4869]: E0314 09:27:36.704679 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:27:46 crc kubenswrapper[4869]: I0314 09:27:46.703935 4869 scope.go:117] "RemoveContainer" 
containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:27:46 crc kubenswrapper[4869]: E0314 09:27:46.704817 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:27:47 crc kubenswrapper[4869]: I0314 09:27:47.711218 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:27:47 crc kubenswrapper[4869]: E0314 09:27:47.711712 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:27:50 crc kubenswrapper[4869]: I0314 09:27:50.704253 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:27:50 crc kubenswrapper[4869]: E0314 09:27:50.704797 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:27:57 crc kubenswrapper[4869]: I0314 09:27:57.714918 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:27:57 crc kubenswrapper[4869]: E0314 09:27:57.715689 4869 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:27:58 crc kubenswrapper[4869]: I0314 09:27:58.704075 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:27:58 crc kubenswrapper[4869]: E0314 09:27:58.704797 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.152246 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558008-dv5r4"] Mar 14 09:28:00 crc kubenswrapper[4869]: E0314 09:28:00.153276 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a586e9-6573-4c52-9036-27ca2d2dac17" containerName="oc" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.153302 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a586e9-6573-4c52-9036-27ca2d2dac17" containerName="oc" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.153574 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a586e9-6573-4c52-9036-27ca2d2dac17" containerName="oc" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.154501 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558008-dv5r4" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.157288 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.158035 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.158161 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.166782 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558008-dv5r4"] Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.219948 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2ld\" (UniqueName: \"kubernetes.io/projected/a7c45ab1-4de5-46c0-92c7-46fd95f53f74-kube-api-access-rj2ld\") pod \"auto-csr-approver-29558008-dv5r4\" (UID: \"a7c45ab1-4de5-46c0-92c7-46fd95f53f74\") " pod="openshift-infra/auto-csr-approver-29558008-dv5r4" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.321769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2ld\" (UniqueName: \"kubernetes.io/projected/a7c45ab1-4de5-46c0-92c7-46fd95f53f74-kube-api-access-rj2ld\") pod \"auto-csr-approver-29558008-dv5r4\" (UID: \"a7c45ab1-4de5-46c0-92c7-46fd95f53f74\") " pod="openshift-infra/auto-csr-approver-29558008-dv5r4" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.343632 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2ld\" (UniqueName: \"kubernetes.io/projected/a7c45ab1-4de5-46c0-92c7-46fd95f53f74-kube-api-access-rj2ld\") pod \"auto-csr-approver-29558008-dv5r4\" (UID: \"a7c45ab1-4de5-46c0-92c7-46fd95f53f74\") " 
pod="openshift-infra/auto-csr-approver-29558008-dv5r4" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.474873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558008-dv5r4" Mar 14 09:28:00 crc kubenswrapper[4869]: I0314 09:28:00.957743 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558008-dv5r4"] Mar 14 09:28:01 crc kubenswrapper[4869]: I0314 09:28:01.618522 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558008-dv5r4" event={"ID":"a7c45ab1-4de5-46c0-92c7-46fd95f53f74","Type":"ContainerStarted","Data":"59ab6806b4a242b2f4e5d7bd4f5373ed1576b36156ca0b2951923ea0fc0a2860"} Mar 14 09:28:02 crc kubenswrapper[4869]: I0314 09:28:02.038238 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0e4e-account-create-update-gt57r"] Mar 14 09:28:02 crc kubenswrapper[4869]: I0314 09:28:02.047836 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0e4e-account-create-update-gt57r"] Mar 14 09:28:02 crc kubenswrapper[4869]: I0314 09:28:02.629869 4869 generic.go:334] "Generic (PLEG): container finished" podID="a7c45ab1-4de5-46c0-92c7-46fd95f53f74" containerID="70e1bf95f904f3b5fd934c7093e7d2268fe3294460e3fda035d65d68e6b7479e" exitCode=0 Mar 14 09:28:02 crc kubenswrapper[4869]: I0314 09:28:02.629924 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558008-dv5r4" event={"ID":"a7c45ab1-4de5-46c0-92c7-46fd95f53f74","Type":"ContainerDied","Data":"70e1bf95f904f3b5fd934c7093e7d2268fe3294460e3fda035d65d68e6b7479e"} Mar 14 09:28:02 crc kubenswrapper[4869]: I0314 09:28:02.703771 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:28:02 crc kubenswrapper[4869]: E0314 09:28:02.704168 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.039958 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-9v9bw"] Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.049699 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-5fe4-account-create-update-fzp8k"] Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.059900 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-9g9kc"] Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.069496 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-9v9bw"] Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.078114 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-5fe4-account-create-update-fzp8k"] Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.088189 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-9g9kc"] Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.091194 4869 scope.go:117] "RemoveContainer" containerID="1a8e4e22e31e902e0f9b62898ddd127340e229eeca98669f4d80fc0110ef607d" Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.729999 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012663ea-c91c-4157-b2e3-a11a65a9a6d1" path="/var/lib/kubelet/pods/012663ea-c91c-4157-b2e3-a11a65a9a6d1/volumes" Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.731547 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d18cddc-84fe-40cd-87c2-041c2f7bcaa1" path="/var/lib/kubelet/pods/0d18cddc-84fe-40cd-87c2-041c2f7bcaa1/volumes" Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 
09:28:03.733823 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e55491-9c61-49eb-84fb-38ada8084c67" path="/var/lib/kubelet/pods/97e55491-9c61-49eb-84fb-38ada8084c67/volumes" Mar 14 09:28:03 crc kubenswrapper[4869]: I0314 09:28:03.734918 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b74b0055-dff6-4cff-82ec-6fd1abdc5a9c" path="/var/lib/kubelet/pods/b74b0055-dff6-4cff-82ec-6fd1abdc5a9c/volumes" Mar 14 09:28:04 crc kubenswrapper[4869]: I0314 09:28:04.032225 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558008-dv5r4" Mar 14 09:28:04 crc kubenswrapper[4869]: I0314 09:28:04.099480 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj2ld\" (UniqueName: \"kubernetes.io/projected/a7c45ab1-4de5-46c0-92c7-46fd95f53f74-kube-api-access-rj2ld\") pod \"a7c45ab1-4de5-46c0-92c7-46fd95f53f74\" (UID: \"a7c45ab1-4de5-46c0-92c7-46fd95f53f74\") " Mar 14 09:28:04 crc kubenswrapper[4869]: I0314 09:28:04.105038 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c45ab1-4de5-46c0-92c7-46fd95f53f74-kube-api-access-rj2ld" (OuterVolumeSpecName: "kube-api-access-rj2ld") pod "a7c45ab1-4de5-46c0-92c7-46fd95f53f74" (UID: "a7c45ab1-4de5-46c0-92c7-46fd95f53f74"). InnerVolumeSpecName "kube-api-access-rj2ld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:28:04 crc kubenswrapper[4869]: I0314 09:28:04.202542 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj2ld\" (UniqueName: \"kubernetes.io/projected/a7c45ab1-4de5-46c0-92c7-46fd95f53f74-kube-api-access-rj2ld\") on node \"crc\" DevicePath \"\"" Mar 14 09:28:04 crc kubenswrapper[4869]: I0314 09:28:04.651997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558008-dv5r4" event={"ID":"a7c45ab1-4de5-46c0-92c7-46fd95f53f74","Type":"ContainerDied","Data":"59ab6806b4a242b2f4e5d7bd4f5373ed1576b36156ca0b2951923ea0fc0a2860"} Mar 14 09:28:04 crc kubenswrapper[4869]: I0314 09:28:04.652320 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59ab6806b4a242b2f4e5d7bd4f5373ed1576b36156ca0b2951923ea0fc0a2860" Mar 14 09:28:04 crc kubenswrapper[4869]: I0314 09:28:04.652074 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558008-dv5r4" Mar 14 09:28:05 crc kubenswrapper[4869]: I0314 09:28:05.161344 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558002-d5tn6"] Mar 14 09:28:05 crc kubenswrapper[4869]: I0314 09:28:05.176000 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558002-d5tn6"] Mar 14 09:28:05 crc kubenswrapper[4869]: I0314 09:28:05.714853 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06766ec-aff5-4eb7-983b-bfa7fdd84b72" path="/var/lib/kubelet/pods/b06766ec-aff5-4eb7-983b-bfa7fdd84b72/volumes" Mar 14 09:28:11 crc kubenswrapper[4869]: I0314 09:28:11.704080 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:28:11 crc kubenswrapper[4869]: E0314 09:28:11.705030 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:28:13 crc kubenswrapper[4869]: I0314 09:28:13.704776 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:28:13 crc kubenswrapper[4869]: E0314 09:28:13.705305 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:28:14 crc kubenswrapper[4869]: I0314 09:28:14.043606 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-d9nfj"] Mar 14 09:28:14 crc kubenswrapper[4869]: I0314 09:28:14.057973 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-69f0-account-create-update-pgvcz"] Mar 14 09:28:14 crc kubenswrapper[4869]: I0314 09:28:14.067104 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-d9nfj"] Mar 14 09:28:14 crc kubenswrapper[4869]: I0314 09:28:14.074753 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-69f0-account-create-update-pgvcz"] Mar 14 09:28:14 crc kubenswrapper[4869]: I0314 09:28:14.704526 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:28:14 crc kubenswrapper[4869]: E0314 09:28:14.704751 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:28:15 crc kubenswrapper[4869]: I0314 09:28:15.034593 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rcj56"] Mar 14 09:28:15 crc kubenswrapper[4869]: I0314 09:28:15.043484 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d6d5-account-create-update-rd5gj"] Mar 14 09:28:15 crc kubenswrapper[4869]: I0314 09:28:15.052316 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d6d5-account-create-update-rd5gj"] Mar 14 09:28:15 crc kubenswrapper[4869]: I0314 09:28:15.061118 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rcj56"] Mar 14 09:28:15 crc kubenswrapper[4869]: I0314 09:28:15.716279 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e29751-96de-4f9a-9756-6bde3535c6ee" path="/var/lib/kubelet/pods/33e29751-96de-4f9a-9756-6bde3535c6ee/volumes" Mar 14 09:28:15 crc kubenswrapper[4869]: I0314 09:28:15.717008 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a191a24-a73d-4f29-b9b4-94ad8d78b4f4" path="/var/lib/kubelet/pods/3a191a24-a73d-4f29-b9b4-94ad8d78b4f4/volumes" Mar 14 09:28:15 crc kubenswrapper[4869]: I0314 09:28:15.717765 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9d5689-4433-473b-9f9b-edd43281b328" path="/var/lib/kubelet/pods/cb9d5689-4433-473b-9f9b-edd43281b328/volumes" Mar 14 09:28:15 crc kubenswrapper[4869]: I0314 09:28:15.718397 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c24332-9232-4665-a910-640c344ea424" path="/var/lib/kubelet/pods/d9c24332-9232-4665-a910-640c344ea424/volumes" Mar 14 09:28:19 crc kubenswrapper[4869]: I0314 09:28:19.032186 4869 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/root-account-create-update-4l4rv"] Mar 14 09:28:19 crc kubenswrapper[4869]: I0314 09:28:19.041168 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4l4rv"] Mar 14 09:28:19 crc kubenswrapper[4869]: I0314 09:28:19.716949 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade47a1c-2503-406e-b29b-d2f0f6976541" path="/var/lib/kubelet/pods/ade47a1c-2503-406e-b29b-d2f0f6976541/volumes" Mar 14 09:28:24 crc kubenswrapper[4869]: I0314 09:28:24.704599 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:28:24 crc kubenswrapper[4869]: E0314 09:28:24.705349 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:28:26 crc kubenswrapper[4869]: I0314 09:28:26.704453 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:28:26 crc kubenswrapper[4869]: E0314 09:28:26.705040 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:28:29 crc kubenswrapper[4869]: I0314 09:28:29.706177 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:28:29 crc kubenswrapper[4869]: E0314 09:28:29.708245 4869 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:28:35 crc kubenswrapper[4869]: I0314 09:28:35.705590 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:28:35 crc kubenswrapper[4869]: E0314 09:28:35.706959 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:28:38 crc kubenswrapper[4869]: I0314 09:28:38.703971 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:28:38 crc kubenswrapper[4869]: E0314 09:28:38.704677 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:28:43 crc kubenswrapper[4869]: I0314 09:28:43.089014 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-j8jwz"] Mar 14 09:28:43 crc kubenswrapper[4869]: I0314 09:28:43.099953 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4b2jl"] Mar 14 09:28:43 crc kubenswrapper[4869]: I0314 09:28:43.113538 4869 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4b2jl"] Mar 14 09:28:43 crc kubenswrapper[4869]: I0314 09:28:43.126270 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-j8jwz"] Mar 14 09:28:43 crc kubenswrapper[4869]: I0314 09:28:43.704406 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:28:43 crc kubenswrapper[4869]: E0314 09:28:43.704821 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:28:43 crc kubenswrapper[4869]: I0314 09:28:43.720790 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6ece88-630b-4388-a67a-7356b8f3812e" path="/var/lib/kubelet/pods/4c6ece88-630b-4388-a67a-7356b8f3812e/volumes" Mar 14 09:28:43 crc kubenswrapper[4869]: I0314 09:28:43.722376 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e8c4d6-376f-4130-8057-06519abb646a" path="/var/lib/kubelet/pods/79e8c4d6-376f-4130-8057-06519abb646a/volumes" Mar 14 09:28:48 crc kubenswrapper[4869]: I0314 09:28:48.704571 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:28:48 crc kubenswrapper[4869]: E0314 09:28:48.705328 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:28:49 crc kubenswrapper[4869]: I0314 09:28:49.703989 4869 
scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:28:49 crc kubenswrapper[4869]: E0314 09:28:49.704698 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:28:50 crc kubenswrapper[4869]: I0314 09:28:50.035667 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pdmtz"] Mar 14 09:28:50 crc kubenswrapper[4869]: I0314 09:28:50.047675 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67a4-account-create-update-xzh65"] Mar 14 09:28:50 crc kubenswrapper[4869]: I0314 09:28:50.057977 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xvbcd"] Mar 14 09:28:50 crc kubenswrapper[4869]: I0314 09:28:50.067176 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2a10-account-create-update-fkpqt"] Mar 14 09:28:50 crc kubenswrapper[4869]: I0314 09:28:50.075374 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-050d-account-create-update-hfstd"] Mar 14 09:28:50 crc kubenswrapper[4869]: I0314 09:28:50.083643 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67a4-account-create-update-xzh65"] Mar 14 09:28:50 crc kubenswrapper[4869]: I0314 09:28:50.091687 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2a10-account-create-update-fkpqt"] Mar 14 09:28:50 crc kubenswrapper[4869]: I0314 09:28:50.100877 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pdmtz"] Mar 14 09:28:50 crc kubenswrapper[4869]: 
I0314 09:28:50.109355 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-050d-account-create-update-hfstd"] Mar 14 09:28:50 crc kubenswrapper[4869]: I0314 09:28:50.120038 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xvbcd"] Mar 14 09:28:51 crc kubenswrapper[4869]: I0314 09:28:51.718912 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c3104a-93f8-416c-b18c-c5434a205595" path="/var/lib/kubelet/pods/08c3104a-93f8-416c-b18c-c5434a205595/volumes" Mar 14 09:28:51 crc kubenswrapper[4869]: I0314 09:28:51.720636 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17eb787e-6879-4fde-896d-6d22cab6748e" path="/var/lib/kubelet/pods/17eb787e-6879-4fde-896d-6d22cab6748e/volumes" Mar 14 09:28:51 crc kubenswrapper[4869]: I0314 09:28:51.721929 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2" path="/var/lib/kubelet/pods/2b85fbfe-6d54-4ac0-b5fe-8b1a8db573c2/volumes" Mar 14 09:28:51 crc kubenswrapper[4869]: I0314 09:28:51.723083 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43206ff4-5f51-4c34-89af-92c875be15a7" path="/var/lib/kubelet/pods/43206ff4-5f51-4c34-89af-92c875be15a7/volumes" Mar 14 09:28:51 crc kubenswrapper[4869]: I0314 09:28:51.725417 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c088cd02-57e6-4c2b-b2bd-4eede2aa610e" path="/var/lib/kubelet/pods/c088cd02-57e6-4c2b-b2bd-4eede2aa610e/volumes" Mar 14 09:28:57 crc kubenswrapper[4869]: I0314 09:28:57.713698 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:28:57 crc kubenswrapper[4869]: E0314 09:28:57.714497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:29:00 crc kubenswrapper[4869]: I0314 09:29:00.704053 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:29:00 crc kubenswrapper[4869]: E0314 09:29:00.705054 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:29:01 crc kubenswrapper[4869]: I0314 09:29:01.704240 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:29:01 crc kubenswrapper[4869]: E0314 09:29:01.704600 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:29:02 crc kubenswrapper[4869]: I0314 09:29:02.047500 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-2qcj2"] Mar 14 09:29:02 crc kubenswrapper[4869]: I0314 09:29:02.059215 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-2qcj2"] Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.139126 4869 scope.go:117] "RemoveContainer" containerID="4ab4198b7bffa9e702dc41d03b1020180b04c26f5151bdc3c0f13fb862589185" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.180800 4869 
scope.go:117] "RemoveContainer" containerID="5a9d402087bc1888f2beebc6b3a722abb82b89c47f8f154e4db8ed56747f36ad" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.228060 4869 scope.go:117] "RemoveContainer" containerID="76911c237eb8e3cfa6942fb95a9b77f03de514d6ebcdf85b0a7d157bc1a4bfb1" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.280839 4869 scope.go:117] "RemoveContainer" containerID="093dd0a2dfb639d50904cc4945bdd277b3dd828c230d0e3e1a3ffe1900031422" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.324129 4869 scope.go:117] "RemoveContainer" containerID="d9a23c61653d81cff146499b3d2649592eac0bd90a5ddb742cbadb816a70c04e" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.374160 4869 scope.go:117] "RemoveContainer" containerID="15b264684b3a6ff3daed674a759454b5a2c1ebba769df0ffa2f82711eee80446" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.423451 4869 scope.go:117] "RemoveContainer" containerID="5462748d3bfe2dec44fb5f71e626cb824ef6520fcd36b3b733563732132e48af" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.449265 4869 scope.go:117] "RemoveContainer" containerID="0f26a3e5e60ee458256bbcf1ad03f7a873517ed74e9d67de3757d0dc94020638" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.479104 4869 scope.go:117] "RemoveContainer" containerID="0daa93c3f89f1ea99752bbdcc8fc060c862276ec13718484fb827eac8aa5a5b2" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.506345 4869 scope.go:117] "RemoveContainer" containerID="6ed0f21547372c6f7cf2117f677a8f498581d2cce37740671961fcdda12ad084" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.549754 4869 scope.go:117] "RemoveContainer" containerID="5a3506dae61d6e571f47db99d85098e8806910814b2371c422793b7bd361de35" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.574716 4869 scope.go:117] "RemoveContainer" containerID="31d7505207d3a4d1a7f3d0d645209173109c78f6ee3b80a9b0bff68706397b16" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.608671 4869 scope.go:117] 
"RemoveContainer" containerID="9d60e6790ad2f360f621c16f107ef758519d061b665b49adc82e4bcd2f372033" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.634072 4869 scope.go:117] "RemoveContainer" containerID="291d39c04c580c509b0b2f6589ab2a8a7d721b6dd563b65d94c5483addf518e6" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.654167 4869 scope.go:117] "RemoveContainer" containerID="bdf579559ac99043b74e0e6eeeb1462ddc3b6ccbfa7f3072aec013c9018fdc18" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.684681 4869 scope.go:117] "RemoveContainer" containerID="da2a11514e60d1c5cf9ee9f12bc072d4b5591d007f769a86f6cd1dbbb3ef87a8" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.719440 4869 scope.go:117] "RemoveContainer" containerID="3ee9531a5f9fbf93f8156f8bc75990c5094d8de53466253ccabee6c719cec9dd" Mar 14 09:29:03 crc kubenswrapper[4869]: I0314 09:29:03.721205 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e1e6856-cc32-474e-8623-48629ef12382" path="/var/lib/kubelet/pods/3e1e6856-cc32-474e-8623-48629ef12382/volumes" Mar 14 09:29:04 crc kubenswrapper[4869]: I0314 09:29:04.033918 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-w9p7x"] Mar 14 09:29:04 crc kubenswrapper[4869]: I0314 09:29:04.045993 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-w9p7x"] Mar 14 09:29:05 crc kubenswrapper[4869]: I0314 09:29:05.724058 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c4d107b-ac87-494b-8aeb-83d4488e934c" path="/var/lib/kubelet/pods/9c4d107b-ac87-494b-8aeb-83d4488e934c/volumes" Mar 14 09:29:10 crc kubenswrapper[4869]: I0314 09:29:10.704311 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:29:10 crc kubenswrapper[4869]: E0314 09:29:10.705120 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:29:13 crc kubenswrapper[4869]: I0314 09:29:13.704198 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:29:13 crc kubenswrapper[4869]: E0314 09:29:13.704913 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:29:15 crc kubenswrapper[4869]: I0314 09:29:15.704599 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:29:15 crc kubenswrapper[4869]: E0314 09:29:15.705271 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:29:24 crc kubenswrapper[4869]: I0314 09:29:24.705335 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:29:24 crc kubenswrapper[4869]: I0314 09:29:24.705961 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:29:24 crc kubenswrapper[4869]: E0314 09:29:24.706184 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:29:24 crc kubenswrapper[4869]: E0314 09:29:24.706240 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:29:26 crc kubenswrapper[4869]: I0314 09:29:26.704850 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:29:26 crc kubenswrapper[4869]: E0314 09:29:26.705379 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:29:35 crc kubenswrapper[4869]: I0314 09:29:35.706391 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:29:35 crc kubenswrapper[4869]: E0314 09:29:35.713220 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:29:36 crc kubenswrapper[4869]: I0314 09:29:36.705088 4869 
scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:29:36 crc kubenswrapper[4869]: E0314 09:29:36.705677 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:29:41 crc kubenswrapper[4869]: I0314 09:29:41.704633 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:29:42 crc kubenswrapper[4869]: I0314 09:29:42.660321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"bdb619bb204b11c09dc2ed986fb9d6c329d9b4662bc4af33573b397425dd1bcd"} Mar 14 09:29:47 crc kubenswrapper[4869]: I0314 09:29:47.713204 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:29:47 crc kubenswrapper[4869]: E0314 09:29:47.714077 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:29:49 crc kubenswrapper[4869]: I0314 09:29:49.703900 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:29:49 crc kubenswrapper[4869]: E0314 09:29:49.704452 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:29:58 crc kubenswrapper[4869]: I0314 09:29:58.060788 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-t7bw5"] Mar 14 09:29:58 crc kubenswrapper[4869]: I0314 09:29:58.074932 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-t7bw5"] Mar 14 09:29:58 crc kubenswrapper[4869]: I0314 09:29:58.704665 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:29:58 crc kubenswrapper[4869]: E0314 09:29:58.705716 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:29:59 crc kubenswrapper[4869]: I0314 09:29:59.716392 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba55bdd0-5e03-45de-820b-59194effebf1" path="/var/lib/kubelet/pods/ba55bdd0-5e03-45de-820b-59194effebf1/volumes" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.182141 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq"] Mar 14 09:30:00 crc kubenswrapper[4869]: E0314 09:30:00.182963 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c45ab1-4de5-46c0-92c7-46fd95f53f74" containerName="oc" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.182982 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c45ab1-4de5-46c0-92c7-46fd95f53f74" containerName="oc" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 
09:30:00.183173 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c45ab1-4de5-46c0-92c7-46fd95f53f74" containerName="oc" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.184107 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.186443 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.186489 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.203178 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558010-jkt9w"] Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.206937 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558010-jkt9w" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.210665 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.210864 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.211789 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.221851 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558010-jkt9w"] Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.230467 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq"] Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.272076 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx62d\" (UniqueName: \"kubernetes.io/projected/eb653d00-0e56-459f-aef8-976660ca7c22-kube-api-access-gx62d\") pod \"collect-profiles-29558010-p6zmq\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.272178 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb653d00-0e56-459f-aef8-976660ca7c22-config-volume\") pod \"collect-profiles-29558010-p6zmq\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.272216 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb653d00-0e56-459f-aef8-976660ca7c22-secret-volume\") pod \"collect-profiles-29558010-p6zmq\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.272738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2tcd\" (UniqueName: \"kubernetes.io/projected/6b270d28-1b86-4702-b593-61c411f3c21f-kube-api-access-n2tcd\") pod \"auto-csr-approver-29558010-jkt9w\" (UID: \"6b270d28-1b86-4702-b593-61c411f3c21f\") " pod="openshift-infra/auto-csr-approver-29558010-jkt9w" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.374950 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb653d00-0e56-459f-aef8-976660ca7c22-config-volume\") pod \"collect-profiles-29558010-p6zmq\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.375034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb653d00-0e56-459f-aef8-976660ca7c22-secret-volume\") pod \"collect-profiles-29558010-p6zmq\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.375200 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2tcd\" (UniqueName: \"kubernetes.io/projected/6b270d28-1b86-4702-b593-61c411f3c21f-kube-api-access-n2tcd\") pod \"auto-csr-approver-29558010-jkt9w\" (UID: \"6b270d28-1b86-4702-b593-61c411f3c21f\") " 
pod="openshift-infra/auto-csr-approver-29558010-jkt9w" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.375309 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx62d\" (UniqueName: \"kubernetes.io/projected/eb653d00-0e56-459f-aef8-976660ca7c22-kube-api-access-gx62d\") pod \"collect-profiles-29558010-p6zmq\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.376070 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb653d00-0e56-459f-aef8-976660ca7c22-config-volume\") pod \"collect-profiles-29558010-p6zmq\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.390287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb653d00-0e56-459f-aef8-976660ca7c22-secret-volume\") pod \"collect-profiles-29558010-p6zmq\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.395130 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx62d\" (UniqueName: \"kubernetes.io/projected/eb653d00-0e56-459f-aef8-976660ca7c22-kube-api-access-gx62d\") pod \"collect-profiles-29558010-p6zmq\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.396475 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2tcd\" (UniqueName: 
\"kubernetes.io/projected/6b270d28-1b86-4702-b593-61c411f3c21f-kube-api-access-n2tcd\") pod \"auto-csr-approver-29558010-jkt9w\" (UID: \"6b270d28-1b86-4702-b593-61c411f3c21f\") " pod="openshift-infra/auto-csr-approver-29558010-jkt9w" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.522055 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.547340 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558010-jkt9w" Mar 14 09:30:00 crc kubenswrapper[4869]: I0314 09:30:00.993796 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq"] Mar 14 09:30:01 crc kubenswrapper[4869]: I0314 09:30:01.030949 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jxvhl"] Mar 14 09:30:01 crc kubenswrapper[4869]: I0314 09:30:01.043587 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jxvhl"] Mar 14 09:30:01 crc kubenswrapper[4869]: I0314 09:30:01.085944 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558010-jkt9w"] Mar 14 09:30:01 crc kubenswrapper[4869]: I0314 09:30:01.093828 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:30:01 crc kubenswrapper[4869]: I0314 09:30:01.720436 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e612c02e-1383-4a14-9267-e1742cb95cc7" path="/var/lib/kubelet/pods/e612c02e-1383-4a14-9267-e1742cb95cc7/volumes" Mar 14 09:30:01 crc kubenswrapper[4869]: I0314 09:30:01.851220 4869 generic.go:334] "Generic (PLEG): container finished" podID="eb653d00-0e56-459f-aef8-976660ca7c22" containerID="d10ebe7185df19b82ea3088f72a7cb37ca9f0c0bc039e6200f28844e31484b53" exitCode=0 
Mar 14 09:30:01 crc kubenswrapper[4869]: I0314 09:30:01.851296 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" event={"ID":"eb653d00-0e56-459f-aef8-976660ca7c22","Type":"ContainerDied","Data":"d10ebe7185df19b82ea3088f72a7cb37ca9f0c0bc039e6200f28844e31484b53"} Mar 14 09:30:01 crc kubenswrapper[4869]: I0314 09:30:01.851326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" event={"ID":"eb653d00-0e56-459f-aef8-976660ca7c22","Type":"ContainerStarted","Data":"50cee2495b51335f5823ede0f930481930e92eacf742130195b4223fd8f37acd"} Mar 14 09:30:01 crc kubenswrapper[4869]: I0314 09:30:01.853032 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558010-jkt9w" event={"ID":"6b270d28-1b86-4702-b593-61c411f3c21f","Type":"ContainerStarted","Data":"76d9d63eaabfe6de04cfa7a2d78a51b868157014bde6324a73e4a69fc244a815"} Mar 14 09:30:02 crc kubenswrapper[4869]: I0314 09:30:02.703649 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:30:02 crc kubenswrapper[4869]: E0314 09:30:02.704108 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.240672 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.344083 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb653d00-0e56-459f-aef8-976660ca7c22-secret-volume\") pod \"eb653d00-0e56-459f-aef8-976660ca7c22\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.346839 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb653d00-0e56-459f-aef8-976660ca7c22-config-volume\") pod \"eb653d00-0e56-459f-aef8-976660ca7c22\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.347040 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx62d\" (UniqueName: \"kubernetes.io/projected/eb653d00-0e56-459f-aef8-976660ca7c22-kube-api-access-gx62d\") pod \"eb653d00-0e56-459f-aef8-976660ca7c22\" (UID: \"eb653d00-0e56-459f-aef8-976660ca7c22\") " Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.347788 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb653d00-0e56-459f-aef8-976660ca7c22-config-volume" (OuterVolumeSpecName: "config-volume") pod "eb653d00-0e56-459f-aef8-976660ca7c22" (UID: "eb653d00-0e56-459f-aef8-976660ca7c22"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.348063 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb653d00-0e56-459f-aef8-976660ca7c22-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.354393 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb653d00-0e56-459f-aef8-976660ca7c22-kube-api-access-gx62d" (OuterVolumeSpecName: "kube-api-access-gx62d") pod "eb653d00-0e56-459f-aef8-976660ca7c22" (UID: "eb653d00-0e56-459f-aef8-976660ca7c22"). InnerVolumeSpecName "kube-api-access-gx62d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.355140 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb653d00-0e56-459f-aef8-976660ca7c22-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eb653d00-0e56-459f-aef8-976660ca7c22" (UID: "eb653d00-0e56-459f-aef8-976660ca7c22"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.451158 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx62d\" (UniqueName: \"kubernetes.io/projected/eb653d00-0e56-459f-aef8-976660ca7c22-kube-api-access-gx62d\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.451221 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb653d00-0e56-459f-aef8-976660ca7c22-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.877001 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.876982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq" event={"ID":"eb653d00-0e56-459f-aef8-976660ca7c22","Type":"ContainerDied","Data":"50cee2495b51335f5823ede0f930481930e92eacf742130195b4223fd8f37acd"} Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.877095 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50cee2495b51335f5823ede0f930481930e92eacf742130195b4223fd8f37acd" Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.880891 4869 generic.go:334] "Generic (PLEG): container finished" podID="6b270d28-1b86-4702-b593-61c411f3c21f" containerID="929fbe70504a4886fd26e8504dab716d300ba97ea5c16169cb6c4f76b69fd8df" exitCode=0 Mar 14 09:30:03 crc kubenswrapper[4869]: I0314 09:30:03.880982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558010-jkt9w" event={"ID":"6b270d28-1b86-4702-b593-61c411f3c21f","Type":"ContainerDied","Data":"929fbe70504a4886fd26e8504dab716d300ba97ea5c16169cb6c4f76b69fd8df"} Mar 14 09:30:04 crc kubenswrapper[4869]: I0314 09:30:04.069617 4869 scope.go:117] "RemoveContainer" containerID="40ba9f6148f7aaac7f94702a995b632193b84b96097df26f9f59e7d37b78357b" Mar 14 09:30:04 crc kubenswrapper[4869]: I0314 09:30:04.120866 4869 scope.go:117] "RemoveContainer" containerID="86194f5de3036d3e093328210a69d11e4fdddf180f9c600e3c0de4e4d14e9d0f" Mar 14 09:30:04 crc kubenswrapper[4869]: I0314 09:30:04.157053 4869 scope.go:117] "RemoveContainer" containerID="666c3fcdee0f3d60ce35f3dd71b484228a4d2c0c4b433f74a33e9da8a140605f" Mar 14 09:30:04 crc kubenswrapper[4869]: I0314 09:30:04.221842 4869 scope.go:117] "RemoveContainer" containerID="da7770e67bcf3029acd2cb3eaf0e1a168134f0c11abda151170a307b68b36548" Mar 14 09:30:05 crc kubenswrapper[4869]: 
I0314 09:30:05.262278 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558010-jkt9w" Mar 14 09:30:05 crc kubenswrapper[4869]: I0314 09:30:05.395316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2tcd\" (UniqueName: \"kubernetes.io/projected/6b270d28-1b86-4702-b593-61c411f3c21f-kube-api-access-n2tcd\") pod \"6b270d28-1b86-4702-b593-61c411f3c21f\" (UID: \"6b270d28-1b86-4702-b593-61c411f3c21f\") " Mar 14 09:30:05 crc kubenswrapper[4869]: I0314 09:30:05.404013 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b270d28-1b86-4702-b593-61c411f3c21f-kube-api-access-n2tcd" (OuterVolumeSpecName: "kube-api-access-n2tcd") pod "6b270d28-1b86-4702-b593-61c411f3c21f" (UID: "6b270d28-1b86-4702-b593-61c411f3c21f"). InnerVolumeSpecName "kube-api-access-n2tcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:30:05 crc kubenswrapper[4869]: I0314 09:30:05.497762 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2tcd\" (UniqueName: \"kubernetes.io/projected/6b270d28-1b86-4702-b593-61c411f3c21f-kube-api-access-n2tcd\") on node \"crc\" DevicePath \"\"" Mar 14 09:30:05 crc kubenswrapper[4869]: I0314 09:30:05.904390 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558010-jkt9w" event={"ID":"6b270d28-1b86-4702-b593-61c411f3c21f","Type":"ContainerDied","Data":"76d9d63eaabfe6de04cfa7a2d78a51b868157014bde6324a73e4a69fc244a815"} Mar 14 09:30:05 crc kubenswrapper[4869]: I0314 09:30:05.904454 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d9d63eaabfe6de04cfa7a2d78a51b868157014bde6324a73e4a69fc244a815" Mar 14 09:30:05 crc kubenswrapper[4869]: I0314 09:30:05.904533 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558010-jkt9w" Mar 14 09:30:06 crc kubenswrapper[4869]: I0314 09:30:06.326165 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558004-w6wsr"] Mar 14 09:30:06 crc kubenswrapper[4869]: I0314 09:30:06.333783 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558004-w6wsr"] Mar 14 09:30:07 crc kubenswrapper[4869]: I0314 09:30:07.719692 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="727814cc-0540-4db5-8a75-690ae32817da" path="/var/lib/kubelet/pods/727814cc-0540-4db5-8a75-690ae32817da/volumes" Mar 14 09:30:11 crc kubenswrapper[4869]: I0314 09:30:11.703658 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:30:11 crc kubenswrapper[4869]: E0314 09:30:11.704861 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:30:14 crc kubenswrapper[4869]: I0314 09:30:14.057192 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kjlv8"] Mar 14 09:30:14 crc kubenswrapper[4869]: I0314 09:30:14.067244 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kjlv8"] Mar 14 09:30:15 crc kubenswrapper[4869]: I0314 09:30:15.718162 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34747e66-40bd-4676-9d8e-673fb09120c0" path="/var/lib/kubelet/pods/34747e66-40bd-4676-9d8e-673fb09120c0/volumes" Mar 14 09:30:16 crc kubenswrapper[4869]: I0314 09:30:16.703857 4869 scope.go:117] "RemoveContainer" 
containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:30:16 crc kubenswrapper[4869]: E0314 09:30:16.704409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:30:26 crc kubenswrapper[4869]: I0314 09:30:26.703886 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:30:26 crc kubenswrapper[4869]: E0314 09:30:26.704671 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:30:27 crc kubenswrapper[4869]: I0314 09:30:27.044402 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-jmtdx"] Mar 14 09:30:27 crc kubenswrapper[4869]: I0314 09:30:27.055130 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-jmtdx"] Mar 14 09:30:27 crc kubenswrapper[4869]: I0314 09:30:27.712836 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:30:27 crc kubenswrapper[4869]: E0314 09:30:27.713125 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:30:27 crc 
kubenswrapper[4869]: I0314 09:30:27.715058 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5806f1f4-83ae-4f76-ba42-f4943cbef129" path="/var/lib/kubelet/pods/5806f1f4-83ae-4f76-ba42-f4943cbef129/volumes" Mar 14 09:30:37 crc kubenswrapper[4869]: I0314 09:30:37.044993 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6sb7f"] Mar 14 09:30:37 crc kubenswrapper[4869]: I0314 09:30:37.057272 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6sb7f"] Mar 14 09:30:37 crc kubenswrapper[4869]: I0314 09:30:37.720015 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eda9c72-2272-45c8-b843-1c2b3c27f709" path="/var/lib/kubelet/pods/8eda9c72-2272-45c8-b843-1c2b3c27f709/volumes" Mar 14 09:30:40 crc kubenswrapper[4869]: I0314 09:30:40.705382 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:30:40 crc kubenswrapper[4869]: E0314 09:30:40.706220 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:30:41 crc kubenswrapper[4869]: I0314 09:30:41.704774 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:30:41 crc kubenswrapper[4869]: E0314 09:30:41.705361 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:30:52 crc 
kubenswrapper[4869]: I0314 09:30:52.703877 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:30:52 crc kubenswrapper[4869]: E0314 09:30:52.704843 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:30:56 crc kubenswrapper[4869]: I0314 09:30:56.704349 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:30:56 crc kubenswrapper[4869]: E0314 09:30:56.705368 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:31:03 crc kubenswrapper[4869]: I0314 09:31:03.703705 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:31:03 crc kubenswrapper[4869]: E0314 09:31:03.704326 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:31:04 crc kubenswrapper[4869]: I0314 09:31:04.366174 4869 scope.go:117] "RemoveContainer" containerID="16f66cc143b425762b7476c0fbcc17d5bb966de3b31c9ed3a53bff59927136da" Mar 14 09:31:04 crc kubenswrapper[4869]: I0314 09:31:04.413556 4869 
scope.go:117] "RemoveContainer" containerID="b227489196ee453d04f492511a870f0d07f537918a1944c03cf846716041d934" Mar 14 09:31:04 crc kubenswrapper[4869]: I0314 09:31:04.462191 4869 scope.go:117] "RemoveContainer" containerID="63afeaed1a472f127b732df459b006e941361f28145823c009d0f9d940099676" Mar 14 09:31:04 crc kubenswrapper[4869]: I0314 09:31:04.535750 4869 scope.go:117] "RemoveContainer" containerID="2883e73047c3225f8d8ef5c13a0957ea8e2bfba859c6682072bb40421f2cb6f3" Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.042079 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2437-account-create-update-qbr82"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.053013 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-x7vlw"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.062367 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-638c-account-create-update-c76x8"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.073861 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ld855"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.084349 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c3ab-account-create-update-bscnd"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.093338 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2437-account-create-update-qbr82"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.100743 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c3ab-account-create-update-bscnd"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.109608 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ld855"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.118916 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-638c-account-create-update-c76x8"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.129571 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-x7vlw"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.136730 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4s89j"] Mar 14 09:31:08 crc kubenswrapper[4869]: I0314 09:31:08.144342 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4s89j"] Mar 14 09:31:09 crc kubenswrapper[4869]: I0314 09:31:09.705010 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:31:09 crc kubenswrapper[4869]: E0314 09:31:09.705637 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:31:09 crc kubenswrapper[4869]: I0314 09:31:09.727237 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0263a6bb-e3ac-4eff-9021-c82a555ae52b" path="/var/lib/kubelet/pods/0263a6bb-e3ac-4eff-9021-c82a555ae52b/volumes" Mar 14 09:31:09 crc kubenswrapper[4869]: I0314 09:31:09.728077 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c679d2d-1e39-47a5-b4cf-dba3430a25d9" path="/var/lib/kubelet/pods/0c679d2d-1e39-47a5-b4cf-dba3430a25d9/volumes" Mar 14 09:31:09 crc kubenswrapper[4869]: I0314 09:31:09.728926 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a951e5-a6d1-4a1c-88ba-ed578c547d55" path="/var/lib/kubelet/pods/47a951e5-a6d1-4a1c-88ba-ed578c547d55/volumes" Mar 14 09:31:09 crc kubenswrapper[4869]: I0314 09:31:09.729700 4869 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="5a73e307-e4ba-4102-b4d6-33897be89646" path="/var/lib/kubelet/pods/5a73e307-e4ba-4102-b4d6-33897be89646/volumes" Mar 14 09:31:09 crc kubenswrapper[4869]: I0314 09:31:09.732042 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bdc2944-fc75-4309-a83f-3a3087099231" path="/var/lib/kubelet/pods/8bdc2944-fc75-4309-a83f-3a3087099231/volumes" Mar 14 09:31:09 crc kubenswrapper[4869]: I0314 09:31:09.733048 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7abd39-848f-41f5-9064-6219922e9684" path="/var/lib/kubelet/pods/ab7abd39-848f-41f5-9064-6219922e9684/volumes" Mar 14 09:31:16 crc kubenswrapper[4869]: I0314 09:31:16.704454 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:31:16 crc kubenswrapper[4869]: E0314 09:31:16.705200 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:31:21 crc kubenswrapper[4869]: I0314 09:31:21.704907 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:31:21 crc kubenswrapper[4869]: E0314 09:31:21.705993 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:31:31 crc kubenswrapper[4869]: I0314 09:31:31.703651 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 
14 09:31:31 crc kubenswrapper[4869]: E0314 09:31:31.704397 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:31:35 crc kubenswrapper[4869]: I0314 09:31:35.704115 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:31:36 crc kubenswrapper[4869]: I0314 09:31:36.797425 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970"} Mar 14 09:31:42 crc kubenswrapper[4869]: I0314 09:31:42.704174 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:31:42 crc kubenswrapper[4869]: E0314 09:31:42.704826 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:31:43 crc kubenswrapper[4869]: I0314 09:31:43.045834 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ft85m"] Mar 14 09:31:43 crc kubenswrapper[4869]: I0314 09:31:43.055537 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ft85m"] Mar 14 09:31:43 crc kubenswrapper[4869]: I0314 09:31:43.715478 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fa348286-4ed9-4e11-8b48-6999c63429f6" path="/var/lib/kubelet/pods/fa348286-4ed9-4e11-8b48-6999c63429f6/volumes" Mar 14 09:31:44 crc kubenswrapper[4869]: I0314 09:31:44.538710 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:31:44 crc kubenswrapper[4869]: I0314 09:31:44.539086 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:31:44 crc kubenswrapper[4869]: I0314 09:31:44.880457 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" exitCode=1 Mar 14 09:31:44 crc kubenswrapper[4869]: I0314 09:31:44.880526 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970"} Mar 14 09:31:44 crc kubenswrapper[4869]: I0314 09:31:44.880578 4869 scope.go:117] "RemoveContainer" containerID="9493af3d1e89a5c3c7d7a55ab9a741403a14744f55681297a686ffb13eed7a3a" Mar 14 09:31:44 crc kubenswrapper[4869]: I0314 09:31:44.881432 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:31:44 crc kubenswrapper[4869]: E0314 09:31:44.881704 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:31:54 crc kubenswrapper[4869]: I0314 09:31:54.539625 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:31:54 crc 
kubenswrapper[4869]: I0314 09:31:54.540216 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:31:54 crc kubenswrapper[4869]: I0314 09:31:54.541016 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:31:54 crc kubenswrapper[4869]: E0314 09:31:54.541220 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:31:55 crc kubenswrapper[4869]: I0314 09:31:55.704841 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:31:55 crc kubenswrapper[4869]: I0314 09:31:55.989470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e"} Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.142809 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558012-6flkj"] Mar 14 09:32:00 crc kubenswrapper[4869]: E0314 09:32:00.143868 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb653d00-0e56-459f-aef8-976660ca7c22" containerName="collect-profiles" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.143890 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb653d00-0e56-459f-aef8-976660ca7c22" containerName="collect-profiles" Mar 14 09:32:00 crc kubenswrapper[4869]: E0314 09:32:00.143904 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b270d28-1b86-4702-b593-61c411f3c21f" containerName="oc" 
Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.143913 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b270d28-1b86-4702-b593-61c411f3c21f" containerName="oc" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.144174 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b270d28-1b86-4702-b593-61c411f3c21f" containerName="oc" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.144195 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb653d00-0e56-459f-aef8-976660ca7c22" containerName="collect-profiles" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.145041 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558012-6flkj" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.147231 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.147645 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.152879 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.154056 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558012-6flkj"] Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.189462 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9qch\" (UniqueName: \"kubernetes.io/projected/8a1d7dbb-6c49-4285-ba32-6f7ec559b77c-kube-api-access-h9qch\") pod \"auto-csr-approver-29558012-6flkj\" (UID: \"8a1d7dbb-6c49-4285-ba32-6f7ec559b77c\") " pod="openshift-infra/auto-csr-approver-29558012-6flkj" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.292536 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9qch\" (UniqueName: \"kubernetes.io/projected/8a1d7dbb-6c49-4285-ba32-6f7ec559b77c-kube-api-access-h9qch\") pod \"auto-csr-approver-29558012-6flkj\" (UID: \"8a1d7dbb-6c49-4285-ba32-6f7ec559b77c\") " pod="openshift-infra/auto-csr-approver-29558012-6flkj" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.324483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9qch\" (UniqueName: \"kubernetes.io/projected/8a1d7dbb-6c49-4285-ba32-6f7ec559b77c-kube-api-access-h9qch\") pod \"auto-csr-approver-29558012-6flkj\" (UID: \"8a1d7dbb-6c49-4285-ba32-6f7ec559b77c\") " pod="openshift-infra/auto-csr-approver-29558012-6flkj" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.468707 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558012-6flkj" Mar 14 09:32:00 crc kubenswrapper[4869]: I0314 09:32:00.937901 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558012-6flkj"] Mar 14 09:32:01 crc kubenswrapper[4869]: I0314 09:32:01.032031 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558012-6flkj" event={"ID":"8a1d7dbb-6c49-4285-ba32-6f7ec559b77c","Type":"ContainerStarted","Data":"3fb815064cb950c3a2b2a423a60c2db91a95f87745191d51b17b237957786b0f"} Mar 14 09:32:04 crc kubenswrapper[4869]: I0314 09:32:04.404876 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:32:04 crc kubenswrapper[4869]: I0314 09:32:04.405583 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:32:04 crc kubenswrapper[4869]: I0314 09:32:04.669862 4869 scope.go:117] "RemoveContainer" containerID="964c988c7c0652c1bf202bcae8a36d8f0c7057e28276e7f14b75ff217e7b1e02" Mar 14 09:32:04 
crc kubenswrapper[4869]: I0314 09:32:04.768566 4869 scope.go:117] "RemoveContainer" containerID="a4e94d05ebee941a175ce63cc676295b49ac9994b26437e9a468730c00253bfc" Mar 14 09:32:04 crc kubenswrapper[4869]: I0314 09:32:04.815223 4869 scope.go:117] "RemoveContainer" containerID="00e8fee75352381b15e48529901c7785caf83fc8989967a1c4adde529fd89fbc" Mar 14 09:32:04 crc kubenswrapper[4869]: I0314 09:32:04.880874 4869 scope.go:117] "RemoveContainer" containerID="08c047c85b1a4aa9ad4956925b6865625b3aa5872ab709dec24957fd67464a75" Mar 14 09:32:04 crc kubenswrapper[4869]: I0314 09:32:04.943609 4869 scope.go:117] "RemoveContainer" containerID="3933c0b08061ccd9547cc68e10e1e7c2fd62007fb8f3095fdfb6ea0ef8673d0d" Mar 14 09:32:05 crc kubenswrapper[4869]: I0314 09:32:05.001772 4869 scope.go:117] "RemoveContainer" containerID="b9854c741b5d7d00c6db01564efaac34ca0749b10b3c8156c7c588e5b187cda7" Mar 14 09:32:05 crc kubenswrapper[4869]: I0314 09:32:05.044703 4869 scope.go:117] "RemoveContainer" containerID="f71f10f72033c3aa58c92eb1143bcbca8cde4ed0d452b4a5e2a6dece3554d724" Mar 14 09:32:05 crc kubenswrapper[4869]: I0314 09:32:05.088382 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" exitCode=1 Mar 14 09:32:05 crc kubenswrapper[4869]: I0314 09:32:05.088442 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e"} Mar 14 09:32:05 crc kubenswrapper[4869]: I0314 09:32:05.088475 4869 scope.go:117] "RemoveContainer" containerID="2bcc1302b11a014db3d04413675c0f2b38932495274e9ff48fef013a668accbd" Mar 14 09:32:05 crc kubenswrapper[4869]: I0314 09:32:05.089348 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:32:05 crc 
kubenswrapper[4869]: E0314 09:32:05.089637 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:32:05 crc kubenswrapper[4869]: I0314 09:32:05.703761 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:32:05 crc kubenswrapper[4869]: E0314 09:32:05.704080 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:32:06 crc kubenswrapper[4869]: I0314 09:32:06.114192 4869 generic.go:334] "Generic (PLEG): container finished" podID="8a1d7dbb-6c49-4285-ba32-6f7ec559b77c" containerID="5b571477932a16c70268eb0a1e629653f7d2ae0f050acb002279f80844537b0a" exitCode=0 Mar 14 09:32:06 crc kubenswrapper[4869]: I0314 09:32:06.114292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558012-6flkj" event={"ID":"8a1d7dbb-6c49-4285-ba32-6f7ec559b77c","Type":"ContainerDied","Data":"5b571477932a16c70268eb0a1e629653f7d2ae0f050acb002279f80844537b0a"} Mar 14 09:32:07 crc kubenswrapper[4869]: I0314 09:32:07.492761 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558012-6flkj" Mar 14 09:32:07 crc kubenswrapper[4869]: I0314 09:32:07.570358 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9qch\" (UniqueName: \"kubernetes.io/projected/8a1d7dbb-6c49-4285-ba32-6f7ec559b77c-kube-api-access-h9qch\") pod \"8a1d7dbb-6c49-4285-ba32-6f7ec559b77c\" (UID: \"8a1d7dbb-6c49-4285-ba32-6f7ec559b77c\") " Mar 14 09:32:07 crc kubenswrapper[4869]: I0314 09:32:07.576153 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1d7dbb-6c49-4285-ba32-6f7ec559b77c-kube-api-access-h9qch" (OuterVolumeSpecName: "kube-api-access-h9qch") pod "8a1d7dbb-6c49-4285-ba32-6f7ec559b77c" (UID: "8a1d7dbb-6c49-4285-ba32-6f7ec559b77c"). InnerVolumeSpecName "kube-api-access-h9qch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:32:07 crc kubenswrapper[4869]: I0314 09:32:07.672493 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9qch\" (UniqueName: \"kubernetes.io/projected/8a1d7dbb-6c49-4285-ba32-6f7ec559b77c-kube-api-access-h9qch\") on node \"crc\" DevicePath \"\"" Mar 14 09:32:07 crc kubenswrapper[4869]: E0314 09:32:07.907008 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a1d7dbb_6c49_4285_ba32_6f7ec559b77c.slice/crio-3fb815064cb950c3a2b2a423a60c2db91a95f87745191d51b17b237957786b0f\": RecentStats: unable to find data in memory cache]" Mar 14 09:32:08 crc kubenswrapper[4869]: I0314 09:32:08.141466 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558012-6flkj" event={"ID":"8a1d7dbb-6c49-4285-ba32-6f7ec559b77c","Type":"ContainerDied","Data":"3fb815064cb950c3a2b2a423a60c2db91a95f87745191d51b17b237957786b0f"} Mar 14 09:32:08 crc kubenswrapper[4869]: I0314 09:32:08.141845 4869 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb815064cb950c3a2b2a423a60c2db91a95f87745191d51b17b237957786b0f" Mar 14 09:32:08 crc kubenswrapper[4869]: I0314 09:32:08.141586 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558012-6flkj" Mar 14 09:32:08 crc kubenswrapper[4869]: I0314 09:32:08.557075 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558006-nqpnr"] Mar 14 09:32:08 crc kubenswrapper[4869]: I0314 09:32:08.571786 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558006-nqpnr"] Mar 14 09:32:09 crc kubenswrapper[4869]: I0314 09:32:09.605344 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:32:09 crc kubenswrapper[4869]: I0314 09:32:09.605670 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:32:09 crc kubenswrapper[4869]: I0314 09:32:09.725394 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11a586e9-6573-4c52-9036-27ca2d2dac17" path="/var/lib/kubelet/pods/11a586e9-6573-4c52-9036-27ca2d2dac17/volumes" Mar 14 09:32:14 crc kubenswrapper[4869]: I0314 09:32:14.404443 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:32:14 crc kubenswrapper[4869]: I0314 09:32:14.405140 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:32:14 crc kubenswrapper[4869]: I0314 09:32:14.406151 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:32:14 crc kubenswrapper[4869]: E0314 09:32:14.406435 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:32:18 crc kubenswrapper[4869]: I0314 09:32:18.704317 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:32:18 crc kubenswrapper[4869]: E0314 09:32:18.705253 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:32:27 crc kubenswrapper[4869]: I0314 09:32:27.710952 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:32:27 crc kubenswrapper[4869]: E0314 09:32:27.712259 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:32:30 crc kubenswrapper[4869]: I0314 09:32:30.704151 4869 scope.go:117] "RemoveContainer" 
containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:32:30 crc kubenswrapper[4869]: E0314 09:32:30.704924 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:32:39 crc kubenswrapper[4869]: I0314 09:32:39.604972 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:32:39 crc kubenswrapper[4869]: I0314 09:32:39.605552 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:32:40 crc kubenswrapper[4869]: I0314 09:32:40.704786 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:32:40 crc kubenswrapper[4869]: E0314 09:32:40.705436 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:32:41 crc kubenswrapper[4869]: I0314 09:32:41.040806 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xnjqv"] Mar 14 
09:32:41 crc kubenswrapper[4869]: I0314 09:32:41.052267 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xnjqv"] Mar 14 09:32:41 crc kubenswrapper[4869]: I0314 09:32:41.705025 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:32:41 crc kubenswrapper[4869]: E0314 09:32:41.705459 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:32:41 crc kubenswrapper[4869]: I0314 09:32:41.718403 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c5dd4a-369c-43a8-9d96-b67997800a45" path="/var/lib/kubelet/pods/31c5dd4a-369c-43a8-9d96-b67997800a45/volumes" Mar 14 09:32:51 crc kubenswrapper[4869]: I0314 09:32:51.706466 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:32:51 crc kubenswrapper[4869]: E0314 09:32:51.709153 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:32:52 crc kubenswrapper[4869]: I0314 09:32:52.704352 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:32:52 crc kubenswrapper[4869]: E0314 09:32:52.704892 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:32:54 crc kubenswrapper[4869]: I0314 09:32:54.054389 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k8p4c"] Mar 14 09:32:54 crc kubenswrapper[4869]: I0314 09:32:54.065302 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-k8p4c"] Mar 14 09:32:55 crc kubenswrapper[4869]: I0314 09:32:55.716315 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="868be304-0fd3-401b-8f0d-c1997da82c45" path="/var/lib/kubelet/pods/868be304-0fd3-401b-8f0d-c1997da82c45/volumes" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.483726 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kszpj"] Mar 14 09:33:02 crc kubenswrapper[4869]: E0314 09:33:02.484786 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1d7dbb-6c49-4285-ba32-6f7ec559b77c" containerName="oc" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.484801 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1d7dbb-6c49-4285-ba32-6f7ec559b77c" containerName="oc" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.484986 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a1d7dbb-6c49-4285-ba32-6f7ec559b77c" containerName="oc" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.486450 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.500606 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kszpj"] Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.589242 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-catalog-content\") pod \"redhat-operators-kszpj\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.589699 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-utilities\") pod \"redhat-operators-kszpj\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.589830 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxnxr\" (UniqueName: \"kubernetes.io/projected/e3540b0f-9756-4b7b-8630-80dc0fc11064-kube-api-access-wxnxr\") pod \"redhat-operators-kszpj\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.691855 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxnxr\" (UniqueName: \"kubernetes.io/projected/e3540b0f-9756-4b7b-8630-80dc0fc11064-kube-api-access-wxnxr\") pod \"redhat-operators-kszpj\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.692176 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-catalog-content\") pod \"redhat-operators-kszpj\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.692294 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-utilities\") pod \"redhat-operators-kszpj\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.693014 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-utilities\") pod \"redhat-operators-kszpj\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.693344 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-catalog-content\") pod \"redhat-operators-kszpj\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.722153 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxnxr\" (UniqueName: \"kubernetes.io/projected/e3540b0f-9756-4b7b-8630-80dc0fc11064-kube-api-access-wxnxr\") pod \"redhat-operators-kszpj\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:02 crc kubenswrapper[4869]: I0314 09:33:02.854607 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:03 crc kubenswrapper[4869]: I0314 09:33:03.340403 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kszpj"] Mar 14 09:33:03 crc kubenswrapper[4869]: W0314 09:33:03.346664 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3540b0f_9756_4b7b_8630_80dc0fc11064.slice/crio-98ccc6d6bbdbf0afd6dfff6b63ae6757a206b0609b4c8787e782d9444db43afe WatchSource:0}: Error finding container 98ccc6d6bbdbf0afd6dfff6b63ae6757a206b0609b4c8787e782d9444db43afe: Status 404 returned error can't find the container with id 98ccc6d6bbdbf0afd6dfff6b63ae6757a206b0609b4c8787e782d9444db43afe Mar 14 09:33:03 crc kubenswrapper[4869]: I0314 09:33:03.727463 4869 generic.go:334] "Generic (PLEG): container finished" podID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerID="7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783" exitCode=0 Mar 14 09:33:03 crc kubenswrapper[4869]: I0314 09:33:03.727759 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kszpj" event={"ID":"e3540b0f-9756-4b7b-8630-80dc0fc11064","Type":"ContainerDied","Data":"7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783"} Mar 14 09:33:03 crc kubenswrapper[4869]: I0314 09:33:03.727789 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kszpj" event={"ID":"e3540b0f-9756-4b7b-8630-80dc0fc11064","Type":"ContainerStarted","Data":"98ccc6d6bbdbf0afd6dfff6b63ae6757a206b0609b4c8787e782d9444db43afe"} Mar 14 09:33:04 crc kubenswrapper[4869]: I0314 09:33:04.704783 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:33:04 crc kubenswrapper[4869]: E0314 09:33:04.705368 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:33:04 crc kubenswrapper[4869]: I0314 09:33:04.752822 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kszpj" event={"ID":"e3540b0f-9756-4b7b-8630-80dc0fc11064","Type":"ContainerStarted","Data":"926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82"} Mar 14 09:33:05 crc kubenswrapper[4869]: I0314 09:33:05.440611 4869 scope.go:117] "RemoveContainer" containerID="2f43876528e28de0b50e043a4ec6e9fbb8e890e50ef96f7dec215876b2468a59" Mar 14 09:33:05 crc kubenswrapper[4869]: I0314 09:33:05.536282 4869 scope.go:117] "RemoveContainer" containerID="0fd11c30969a4181b257eb1ca7ccfc35a87f6f858eaa0821edd30f27cb9e9e12" Mar 14 09:33:05 crc kubenswrapper[4869]: I0314 09:33:05.578766 4869 scope.go:117] "RemoveContainer" containerID="749185c05dc94aae5344dff36d22a508d7caaad054e7ab0dd431e070005dd822" Mar 14 09:33:06 crc kubenswrapper[4869]: I0314 09:33:06.704155 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:33:06 crc kubenswrapper[4869]: E0314 09:33:06.704915 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:33:07 crc kubenswrapper[4869]: I0314 09:33:07.783139 4869 generic.go:334] "Generic (PLEG): container finished" podID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerID="926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82" exitCode=0 Mar 14 09:33:07 crc 
kubenswrapper[4869]: I0314 09:33:07.783216 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kszpj" event={"ID":"e3540b0f-9756-4b7b-8630-80dc0fc11064","Type":"ContainerDied","Data":"926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82"} Mar 14 09:33:08 crc kubenswrapper[4869]: I0314 09:33:08.794570 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kszpj" event={"ID":"e3540b0f-9756-4b7b-8630-80dc0fc11064","Type":"ContainerStarted","Data":"2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1"} Mar 14 09:33:08 crc kubenswrapper[4869]: I0314 09:33:08.835378 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kszpj" podStartSLOduration=2.300197762 podStartE2EDuration="6.835353471s" podCreationTimestamp="2026-03-14 09:33:02 +0000 UTC" firstStartedPulling="2026-03-14 09:33:03.73047672 +0000 UTC m=+2136.702758773" lastFinishedPulling="2026-03-14 09:33:08.265632429 +0000 UTC m=+2141.237914482" observedRunningTime="2026-03-14 09:33:08.816762175 +0000 UTC m=+2141.789044268" watchObservedRunningTime="2026-03-14 09:33:08.835353471 +0000 UTC m=+2141.807635534" Mar 14 09:33:09 crc kubenswrapper[4869]: I0314 09:33:09.605783 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:33:09 crc kubenswrapper[4869]: I0314 09:33:09.605868 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Mar 14 09:33:09 crc kubenswrapper[4869]: I0314 09:33:09.605942 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:33:09 crc kubenswrapper[4869]: I0314 09:33:09.607048 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bdb619bb204b11c09dc2ed986fb9d6c329d9b4662bc4af33573b397425dd1bcd"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:33:09 crc kubenswrapper[4869]: I0314 09:33:09.607111 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://bdb619bb204b11c09dc2ed986fb9d6c329d9b4662bc4af33573b397425dd1bcd" gracePeriod=600 Mar 14 09:33:09 crc kubenswrapper[4869]: I0314 09:33:09.805407 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="bdb619bb204b11c09dc2ed986fb9d6c329d9b4662bc4af33573b397425dd1bcd" exitCode=0 Mar 14 09:33:09 crc kubenswrapper[4869]: I0314 09:33:09.805451 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"bdb619bb204b11c09dc2ed986fb9d6c329d9b4662bc4af33573b397425dd1bcd"} Mar 14 09:33:09 crc kubenswrapper[4869]: I0314 09:33:09.805484 4869 scope.go:117] "RemoveContainer" containerID="27c5f122c63bb923449f02bdafd2b406c9c7afb35b3301dec639215952941311" Mar 14 09:33:10 crc kubenswrapper[4869]: I0314 09:33:10.816825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" 
event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"} Mar 14 09:33:12 crc kubenswrapper[4869]: I0314 09:33:12.855448 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:12 crc kubenswrapper[4869]: I0314 09:33:12.855828 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:13 crc kubenswrapper[4869]: I0314 09:33:13.904684 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kszpj" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerName="registry-server" probeResult="failure" output=< Mar 14 09:33:13 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 09:33:13 crc kubenswrapper[4869]: > Mar 14 09:33:17 crc kubenswrapper[4869]: I0314 09:33:17.709986 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:33:17 crc kubenswrapper[4869]: E0314 09:33:17.710720 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:33:18 crc kubenswrapper[4869]: I0314 09:33:18.704688 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:33:18 crc kubenswrapper[4869]: E0314 09:33:18.705279 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:33:22 crc kubenswrapper[4869]: I0314 09:33:22.936703 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:22 crc kubenswrapper[4869]: I0314 09:33:22.990954 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:24 crc kubenswrapper[4869]: I0314 09:33:24.264347 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kszpj"] Mar 14 09:33:24 crc kubenswrapper[4869]: I0314 09:33:24.957759 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kszpj" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerName="registry-server" containerID="cri-o://2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1" gracePeriod=2 Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.435814 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.582347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-catalog-content\") pod \"e3540b0f-9756-4b7b-8630-80dc0fc11064\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.582747 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-utilities\") pod \"e3540b0f-9756-4b7b-8630-80dc0fc11064\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.582819 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxnxr\" (UniqueName: \"kubernetes.io/projected/e3540b0f-9756-4b7b-8630-80dc0fc11064-kube-api-access-wxnxr\") pod \"e3540b0f-9756-4b7b-8630-80dc0fc11064\" (UID: \"e3540b0f-9756-4b7b-8630-80dc0fc11064\") " Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.584124 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-utilities" (OuterVolumeSpecName: "utilities") pod "e3540b0f-9756-4b7b-8630-80dc0fc11064" (UID: "e3540b0f-9756-4b7b-8630-80dc0fc11064"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.589101 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3540b0f-9756-4b7b-8630-80dc0fc11064-kube-api-access-wxnxr" (OuterVolumeSpecName: "kube-api-access-wxnxr") pod "e3540b0f-9756-4b7b-8630-80dc0fc11064" (UID: "e3540b0f-9756-4b7b-8630-80dc0fc11064"). InnerVolumeSpecName "kube-api-access-wxnxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.686378 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.686410 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxnxr\" (UniqueName: \"kubernetes.io/projected/e3540b0f-9756-4b7b-8630-80dc0fc11064-kube-api-access-wxnxr\") on node \"crc\" DevicePath \"\"" Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.748759 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3540b0f-9756-4b7b-8630-80dc0fc11064" (UID: "e3540b0f-9756-4b7b-8630-80dc0fc11064"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.788480 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3540b0f-9756-4b7b-8630-80dc0fc11064-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.968721 4869 generic.go:334] "Generic (PLEG): container finished" podID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerID="2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1" exitCode=0 Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.968766 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kszpj" event={"ID":"e3540b0f-9756-4b7b-8630-80dc0fc11064","Type":"ContainerDied","Data":"2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1"} Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.968798 4869 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kszpj" Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.968832 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kszpj" event={"ID":"e3540b0f-9756-4b7b-8630-80dc0fc11064","Type":"ContainerDied","Data":"98ccc6d6bbdbf0afd6dfff6b63ae6757a206b0609b4c8787e782d9444db43afe"} Mar 14 09:33:25 crc kubenswrapper[4869]: I0314 09:33:25.968862 4869 scope.go:117] "RemoveContainer" containerID="2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1" Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.004396 4869 scope.go:117] "RemoveContainer" containerID="926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82" Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.007925 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kszpj"] Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.020980 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kszpj"] Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.038101 4869 scope.go:117] "RemoveContainer" containerID="7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783" Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.068135 4869 scope.go:117] "RemoveContainer" containerID="2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1" Mar 14 09:33:26 crc kubenswrapper[4869]: E0314 09:33:26.068590 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1\": container with ID starting with 2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1 not found: ID does not exist" containerID="2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1" Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.068622 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1"} err="failed to get container status \"2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1\": rpc error: code = NotFound desc = could not find container \"2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1\": container with ID starting with 2c2136e42fd18f6d5635ac53e57480eb0de2eb1b9ddec94da5f11cfb8f45a1f1 not found: ID does not exist" Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.068644 4869 scope.go:117] "RemoveContainer" containerID="926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82" Mar 14 09:33:26 crc kubenswrapper[4869]: E0314 09:33:26.068871 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82\": container with ID starting with 926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82 not found: ID does not exist" containerID="926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82" Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.068932 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82"} err="failed to get container status \"926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82\": rpc error: code = NotFound desc = could not find container \"926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82\": container with ID starting with 926008b8dd6f83cfa191de13524ccdfad9bf23009493881c312f61e753778d82 not found: ID does not exist" Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.068950 4869 scope.go:117] "RemoveContainer" containerID="7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783" Mar 14 09:33:26 crc kubenswrapper[4869]: E0314 
09:33:26.069434 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783\": container with ID starting with 7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783 not found: ID does not exist" containerID="7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783" Mar 14 09:33:26 crc kubenswrapper[4869]: I0314 09:33:26.069459 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783"} err="failed to get container status \"7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783\": rpc error: code = NotFound desc = could not find container \"7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783\": container with ID starting with 7da8328f347d7dc76fbaf20142dc3e05f782e62dd668b5444bbd97787ad52783 not found: ID does not exist" Mar 14 09:33:27 crc kubenswrapper[4869]: I0314 09:33:27.720828 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" path="/var/lib/kubelet/pods/e3540b0f-9756-4b7b-8630-80dc0fc11064/volumes" Mar 14 09:33:31 crc kubenswrapper[4869]: I0314 09:33:31.048845 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xjwxx"] Mar 14 09:33:31 crc kubenswrapper[4869]: I0314 09:33:31.057471 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xjwxx"] Mar 14 09:33:31 crc kubenswrapper[4869]: I0314 09:33:31.704802 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:33:31 crc kubenswrapper[4869]: I0314 09:33:31.704874 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:33:31 crc kubenswrapper[4869]: E0314 
09:33:31.705062 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:33:31 crc kubenswrapper[4869]: E0314 09:33:31.705066 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:33:31 crc kubenswrapper[4869]: I0314 09:33:31.714595 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0cf5e02-e6ac-4c54-a514-948485fd56fb" path="/var/lib/kubelet/pods/c0cf5e02-e6ac-4c54-a514-948485fd56fb/volumes" Mar 14 09:33:42 crc kubenswrapper[4869]: I0314 09:33:42.704303 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:33:42 crc kubenswrapper[4869]: E0314 09:33:42.704924 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:33:46 crc kubenswrapper[4869]: I0314 09:33:46.704416 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:33:46 crc kubenswrapper[4869]: E0314 09:33:46.705302 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.554677 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84m6d"] Mar 14 09:33:47 crc kubenswrapper[4869]: E0314 09:33:47.555166 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerName="extract-content" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.555191 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerName="extract-content" Mar 14 09:33:47 crc kubenswrapper[4869]: E0314 09:33:47.555215 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerName="registry-server" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.555224 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerName="registry-server" Mar 14 09:33:47 crc kubenswrapper[4869]: E0314 09:33:47.555249 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerName="extract-utilities" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.555257 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerName="extract-utilities" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.555592 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3540b0f-9756-4b7b-8630-80dc0fc11064" containerName="registry-server" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.557570 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.567328 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84m6d"] Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.667939 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-catalog-content\") pod \"certified-operators-84m6d\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.668042 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-utilities\") pod \"certified-operators-84m6d\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.668074 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8zt\" (UniqueName: \"kubernetes.io/projected/a825be5e-86a9-48ff-bcde-60751034cee1-kube-api-access-fb8zt\") pod \"certified-operators-84m6d\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.770742 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-utilities\") pod \"certified-operators-84m6d\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.770790 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fb8zt\" (UniqueName: \"kubernetes.io/projected/a825be5e-86a9-48ff-bcde-60751034cee1-kube-api-access-fb8zt\") pod \"certified-operators-84m6d\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.770923 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-catalog-content\") pod \"certified-operators-84m6d\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.771250 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-catalog-content\") pod \"certified-operators-84m6d\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.771260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-utilities\") pod \"certified-operators-84m6d\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.789331 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8zt\" (UniqueName: \"kubernetes.io/projected/a825be5e-86a9-48ff-bcde-60751034cee1-kube-api-access-fb8zt\") pod \"certified-operators-84m6d\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.879553 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.958268 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-84shr"] Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.960827 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:47 crc kubenswrapper[4869]: I0314 09:33:47.969867 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84shr"] Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.077627 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-utilities\") pod \"community-operators-84shr\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.078137 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-catalog-content\") pod \"community-operators-84shr\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.078205 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftclg\" (UniqueName: \"kubernetes.io/projected/af159393-f902-4c97-a7f5-689f13e4ef9e-kube-api-access-ftclg\") pod \"community-operators-84shr\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.181487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-catalog-content\") pod \"community-operators-84shr\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.181603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftclg\" (UniqueName: \"kubernetes.io/projected/af159393-f902-4c97-a7f5-689f13e4ef9e-kube-api-access-ftclg\") pod \"community-operators-84shr\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.181798 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-utilities\") pod \"community-operators-84shr\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.182176 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-catalog-content\") pod \"community-operators-84shr\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.182232 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-utilities\") pod \"community-operators-84shr\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.208430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftclg\" (UniqueName: 
\"kubernetes.io/projected/af159393-f902-4c97-a7f5-689f13e4ef9e-kube-api-access-ftclg\") pod \"community-operators-84shr\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.341300 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.416382 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84m6d"] Mar 14 09:33:48 crc kubenswrapper[4869]: I0314 09:33:48.908917 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84shr"] Mar 14 09:33:48 crc kubenswrapper[4869]: W0314 09:33:48.910417 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf159393_f902_4c97_a7f5_689f13e4ef9e.slice/crio-1ef937ffb75dd6d678292ce4039c941627807d3901452b776730e2ab63ff7367 WatchSource:0}: Error finding container 1ef937ffb75dd6d678292ce4039c941627807d3901452b776730e2ab63ff7367: Status 404 returned error can't find the container with id 1ef937ffb75dd6d678292ce4039c941627807d3901452b776730e2ab63ff7367 Mar 14 09:33:49 crc kubenswrapper[4869]: I0314 09:33:49.182666 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84shr" event={"ID":"af159393-f902-4c97-a7f5-689f13e4ef9e","Type":"ContainerStarted","Data":"5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301"} Mar 14 09:33:49 crc kubenswrapper[4869]: I0314 09:33:49.182915 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84shr" event={"ID":"af159393-f902-4c97-a7f5-689f13e4ef9e","Type":"ContainerStarted","Data":"1ef937ffb75dd6d678292ce4039c941627807d3901452b776730e2ab63ff7367"} Mar 14 09:33:49 crc kubenswrapper[4869]: I0314 
09:33:49.190546 4869 generic.go:334] "Generic (PLEG): container finished" podID="a825be5e-86a9-48ff-bcde-60751034cee1" containerID="59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58" exitCode=0 Mar 14 09:33:49 crc kubenswrapper[4869]: I0314 09:33:49.190703 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m6d" event={"ID":"a825be5e-86a9-48ff-bcde-60751034cee1","Type":"ContainerDied","Data":"59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58"} Mar 14 09:33:49 crc kubenswrapper[4869]: I0314 09:33:49.190779 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m6d" event={"ID":"a825be5e-86a9-48ff-bcde-60751034cee1","Type":"ContainerStarted","Data":"160d391ce52e0fccfc89a0b4fd7125b7551549e908d8801abad9afe9c330c70f"} Mar 14 09:33:50 crc kubenswrapper[4869]: I0314 09:33:50.206075 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m6d" event={"ID":"a825be5e-86a9-48ff-bcde-60751034cee1","Type":"ContainerStarted","Data":"63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a"} Mar 14 09:33:50 crc kubenswrapper[4869]: I0314 09:33:50.217268 4869 generic.go:334] "Generic (PLEG): container finished" podID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerID="5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301" exitCode=0 Mar 14 09:33:50 crc kubenswrapper[4869]: I0314 09:33:50.217551 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84shr" event={"ID":"af159393-f902-4c97-a7f5-689f13e4ef9e","Type":"ContainerDied","Data":"5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301"} Mar 14 09:33:50 crc kubenswrapper[4869]: I0314 09:33:50.217651 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84shr" 
event={"ID":"af159393-f902-4c97-a7f5-689f13e4ef9e","Type":"ContainerStarted","Data":"167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e"} Mar 14 09:33:50 crc kubenswrapper[4869]: E0314 09:33:50.523456 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf159393_f902_4c97_a7f5_689f13e4ef9e.slice/crio-167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf159393_f902_4c97_a7f5_689f13e4ef9e.slice/crio-conmon-167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e.scope\": RecentStats: unable to find data in memory cache]" Mar 14 09:33:51 crc kubenswrapper[4869]: I0314 09:33:51.237090 4869 generic.go:334] "Generic (PLEG): container finished" podID="a825be5e-86a9-48ff-bcde-60751034cee1" containerID="63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a" exitCode=0 Mar 14 09:33:51 crc kubenswrapper[4869]: I0314 09:33:51.237195 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m6d" event={"ID":"a825be5e-86a9-48ff-bcde-60751034cee1","Type":"ContainerDied","Data":"63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a"} Mar 14 09:33:51 crc kubenswrapper[4869]: I0314 09:33:51.240936 4869 generic.go:334] "Generic (PLEG): container finished" podID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerID="167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e" exitCode=0 Mar 14 09:33:51 crc kubenswrapper[4869]: I0314 09:33:51.240966 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84shr" event={"ID":"af159393-f902-4c97-a7f5-689f13e4ef9e","Type":"ContainerDied","Data":"167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e"} Mar 14 09:33:52 crc 
kubenswrapper[4869]: I0314 09:33:52.251347 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m6d" event={"ID":"a825be5e-86a9-48ff-bcde-60751034cee1","Type":"ContainerStarted","Data":"b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe"} Mar 14 09:33:52 crc kubenswrapper[4869]: I0314 09:33:52.256301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84shr" event={"ID":"af159393-f902-4c97-a7f5-689f13e4ef9e","Type":"ContainerStarted","Data":"968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2"} Mar 14 09:33:52 crc kubenswrapper[4869]: I0314 09:33:52.296223 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84m6d" podStartSLOduration=2.836523556 podStartE2EDuration="5.296200063s" podCreationTimestamp="2026-03-14 09:33:47 +0000 UTC" firstStartedPulling="2026-03-14 09:33:49.192290856 +0000 UTC m=+2182.164572909" lastFinishedPulling="2026-03-14 09:33:51.651967363 +0000 UTC m=+2184.624249416" observedRunningTime="2026-03-14 09:33:52.288690428 +0000 UTC m=+2185.260972481" watchObservedRunningTime="2026-03-14 09:33:52.296200063 +0000 UTC m=+2185.268482116" Mar 14 09:33:52 crc kubenswrapper[4869]: I0314 09:33:52.316968 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-84shr" podStartSLOduration=2.716484863 podStartE2EDuration="5.316946182s" podCreationTimestamp="2026-03-14 09:33:47 +0000 UTC" firstStartedPulling="2026-03-14 09:33:49.187677643 +0000 UTC m=+2182.159959696" lastFinishedPulling="2026-03-14 09:33:51.788138962 +0000 UTC m=+2184.760421015" observedRunningTime="2026-03-14 09:33:52.314996694 +0000 UTC m=+2185.287278747" watchObservedRunningTime="2026-03-14 09:33:52.316946182 +0000 UTC m=+2185.289228245" Mar 14 09:33:54 crc kubenswrapper[4869]: I0314 09:33:54.704162 4869 scope.go:117] "RemoveContainer" 
containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:33:54 crc kubenswrapper[4869]: E0314 09:33:54.704885 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:33:57 crc kubenswrapper[4869]: I0314 09:33:57.880181 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:57 crc kubenswrapper[4869]: I0314 09:33:57.880675 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:57 crc kubenswrapper[4869]: I0314 09:33:57.926146 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:58 crc kubenswrapper[4869]: I0314 09:33:58.344977 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:58 crc kubenswrapper[4869]: I0314 09:33:58.346673 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:58 crc kubenswrapper[4869]: I0314 09:33:58.457930 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-84shr" Mar 14 09:33:58 crc kubenswrapper[4869]: I0314 09:33:58.495797 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:33:58 crc kubenswrapper[4869]: I0314 09:33:58.704882 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" 
Mar 14 09:33:58 crc kubenswrapper[4869]: E0314 09:33:58.705172 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:33:59 crc kubenswrapper[4869]: I0314 09:33:59.164302 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84m6d"] Mar 14 09:33:59 crc kubenswrapper[4869]: I0314 09:33:59.464604 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-84shr" Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.156394 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558014-6hq9c"] Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.159107 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558014-6hq9c" Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.161915 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.162490 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.162679 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.168161 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558014-6hq9c"] Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.258714 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ml6\" (UniqueName: \"kubernetes.io/projected/d395f158-c9ef-4b84-933d-13574bfb9445-kube-api-access-46ml6\") pod \"auto-csr-approver-29558014-6hq9c\" (UID: \"d395f158-c9ef-4b84-933d-13574bfb9445\") " pod="openshift-infra/auto-csr-approver-29558014-6hq9c" Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.360981 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46ml6\" (UniqueName: \"kubernetes.io/projected/d395f158-c9ef-4b84-933d-13574bfb9445-kube-api-access-46ml6\") pod \"auto-csr-approver-29558014-6hq9c\" (UID: \"d395f158-c9ef-4b84-933d-13574bfb9445\") " pod="openshift-infra/auto-csr-approver-29558014-6hq9c" Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.380394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46ml6\" (UniqueName: \"kubernetes.io/projected/d395f158-c9ef-4b84-933d-13574bfb9445-kube-api-access-46ml6\") pod \"auto-csr-approver-29558014-6hq9c\" (UID: \"d395f158-c9ef-4b84-933d-13574bfb9445\") " 
pod="openshift-infra/auto-csr-approver-29558014-6hq9c" Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.410768 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-84m6d" podUID="a825be5e-86a9-48ff-bcde-60751034cee1" containerName="registry-server" containerID="cri-o://b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe" gracePeriod=2 Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.484223 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558014-6hq9c" Mar 14 09:34:00 crc kubenswrapper[4869]: I0314 09:34:00.956346 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.036369 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558014-6hq9c"] Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.076070 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-catalog-content\") pod \"a825be5e-86a9-48ff-bcde-60751034cee1\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.076246 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-utilities\") pod \"a825be5e-86a9-48ff-bcde-60751034cee1\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.076286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb8zt\" (UniqueName: \"kubernetes.io/projected/a825be5e-86a9-48ff-bcde-60751034cee1-kube-api-access-fb8zt\") pod 
\"a825be5e-86a9-48ff-bcde-60751034cee1\" (UID: \"a825be5e-86a9-48ff-bcde-60751034cee1\") " Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.077379 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-utilities" (OuterVolumeSpecName: "utilities") pod "a825be5e-86a9-48ff-bcde-60751034cee1" (UID: "a825be5e-86a9-48ff-bcde-60751034cee1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.082492 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a825be5e-86a9-48ff-bcde-60751034cee1-kube-api-access-fb8zt" (OuterVolumeSpecName: "kube-api-access-fb8zt") pod "a825be5e-86a9-48ff-bcde-60751034cee1" (UID: "a825be5e-86a9-48ff-bcde-60751034cee1"). InnerVolumeSpecName "kube-api-access-fb8zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.137125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a825be5e-86a9-48ff-bcde-60751034cee1" (UID: "a825be5e-86a9-48ff-bcde-60751034cee1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.178696 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.178728 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a825be5e-86a9-48ff-bcde-60751034cee1-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.178739 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb8zt\" (UniqueName: \"kubernetes.io/projected/a825be5e-86a9-48ff-bcde-60751034cee1-kube-api-access-fb8zt\") on node \"crc\" DevicePath \"\"" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.430568 4869 generic.go:334] "Generic (PLEG): container finished" podID="a825be5e-86a9-48ff-bcde-60751034cee1" containerID="b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe" exitCode=0 Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.430626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m6d" event={"ID":"a825be5e-86a9-48ff-bcde-60751034cee1","Type":"ContainerDied","Data":"b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe"} Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.431166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m6d" event={"ID":"a825be5e-86a9-48ff-bcde-60751034cee1","Type":"ContainerDied","Data":"160d391ce52e0fccfc89a0b4fd7125b7551549e908d8801abad9afe9c330c70f"} Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.431193 4869 scope.go:117] "RemoveContainer" containerID="b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 
09:34:01.430669 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84m6d" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.433061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558014-6hq9c" event={"ID":"d395f158-c9ef-4b84-933d-13574bfb9445","Type":"ContainerStarted","Data":"cda5323eda00bec885c1deef1f57e46466f4e097178529a8286193416b778c51"} Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.463820 4869 scope.go:117] "RemoveContainer" containerID="63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.477702 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84m6d"] Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.485750 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-84m6d"] Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.491392 4869 scope.go:117] "RemoveContainer" containerID="59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.546008 4869 scope.go:117] "RemoveContainer" containerID="b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe" Mar 14 09:34:01 crc kubenswrapper[4869]: E0314 09:34:01.546544 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe\": container with ID starting with b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe not found: ID does not exist" containerID="b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.546594 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe"} err="failed to get container status \"b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe\": rpc error: code = NotFound desc = could not find container \"b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe\": container with ID starting with b61e32058c3edcf9b0d7b94c7bb3fbaa8569e9e1c8a349f9b8c8f75d46a742fe not found: ID does not exist" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.546620 4869 scope.go:117] "RemoveContainer" containerID="63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a" Mar 14 09:34:01 crc kubenswrapper[4869]: E0314 09:34:01.547203 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a\": container with ID starting with 63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a not found: ID does not exist" containerID="63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.547259 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a"} err="failed to get container status \"63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a\": rpc error: code = NotFound desc = could not find container \"63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a\": container with ID starting with 63bfec93e27612a4728e945fc4725feec18955f88f647972dc66d0acac5f053a not found: ID does not exist" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.547292 4869 scope.go:117] "RemoveContainer" containerID="59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58" Mar 14 09:34:01 crc kubenswrapper[4869]: E0314 09:34:01.547724 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58\": container with ID starting with 59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58 not found: ID does not exist" containerID="59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.547783 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58"} err="failed to get container status \"59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58\": rpc error: code = NotFound desc = could not find container \"59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58\": container with ID starting with 59a1cccf22e9a0029dc0eef94121e5dc7a922e186d941c05527dfd948729ac58 not found: ID does not exist" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.719248 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a825be5e-86a9-48ff-bcde-60751034cee1" path="/var/lib/kubelet/pods/a825be5e-86a9-48ff-bcde-60751034cee1/volumes" Mar 14 09:34:01 crc kubenswrapper[4869]: I0314 09:34:01.762358 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84shr"] Mar 14 09:34:02 crc kubenswrapper[4869]: I0314 09:34:02.445643 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-84shr" podUID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerName="registry-server" containerID="cri-o://968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2" gracePeriod=2 Mar 14 09:34:02 crc kubenswrapper[4869]: I0314 09:34:02.446896 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558014-6hq9c" 
event={"ID":"d395f158-c9ef-4b84-933d-13574bfb9445","Type":"ContainerStarted","Data":"e642ee38262928152068d33c687cd0c529953d6ff312439890e488d3634b8002"} Mar 14 09:34:02 crc kubenswrapper[4869]: I0314 09:34:02.481647 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558014-6hq9c" podStartSLOduration=1.53895155 podStartE2EDuration="2.48162096s" podCreationTimestamp="2026-03-14 09:34:00 +0000 UTC" firstStartedPulling="2026-03-14 09:34:01.031183687 +0000 UTC m=+2194.003465740" lastFinishedPulling="2026-03-14 09:34:01.973853097 +0000 UTC m=+2194.946135150" observedRunningTime="2026-03-14 09:34:02.470404815 +0000 UTC m=+2195.442686898" watchObservedRunningTime="2026-03-14 09:34:02.48162096 +0000 UTC m=+2195.453903023" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.417251 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84shr" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.463622 4869 generic.go:334] "Generic (PLEG): container finished" podID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerID="968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2" exitCode=0 Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.463684 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84shr" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.463699 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84shr" event={"ID":"af159393-f902-4c97-a7f5-689f13e4ef9e","Type":"ContainerDied","Data":"968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2"} Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.463727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84shr" event={"ID":"af159393-f902-4c97-a7f5-689f13e4ef9e","Type":"ContainerDied","Data":"1ef937ffb75dd6d678292ce4039c941627807d3901452b776730e2ab63ff7367"} Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.463748 4869 scope.go:117] "RemoveContainer" containerID="968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.466215 4869 generic.go:334] "Generic (PLEG): container finished" podID="d395f158-c9ef-4b84-933d-13574bfb9445" containerID="e642ee38262928152068d33c687cd0c529953d6ff312439890e488d3634b8002" exitCode=0 Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.466250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558014-6hq9c" event={"ID":"d395f158-c9ef-4b84-933d-13574bfb9445","Type":"ContainerDied","Data":"e642ee38262928152068d33c687cd0c529953d6ff312439890e488d3634b8002"} Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.493632 4869 scope.go:117] "RemoveContainer" containerID="167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.511886 4869 scope.go:117] "RemoveContainer" containerID="5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.530666 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ftclg\" (UniqueName: \"kubernetes.io/projected/af159393-f902-4c97-a7f5-689f13e4ef9e-kube-api-access-ftclg\") pod \"af159393-f902-4c97-a7f5-689f13e4ef9e\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.532190 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-catalog-content\") pod \"af159393-f902-4c97-a7f5-689f13e4ef9e\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.532289 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-utilities\") pod \"af159393-f902-4c97-a7f5-689f13e4ef9e\" (UID: \"af159393-f902-4c97-a7f5-689f13e4ef9e\") " Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.533434 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-utilities" (OuterVolumeSpecName: "utilities") pod "af159393-f902-4c97-a7f5-689f13e4ef9e" (UID: "af159393-f902-4c97-a7f5-689f13e4ef9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.533645 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.538100 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af159393-f902-4c97-a7f5-689f13e4ef9e-kube-api-access-ftclg" (OuterVolumeSpecName: "kube-api-access-ftclg") pod "af159393-f902-4c97-a7f5-689f13e4ef9e" (UID: "af159393-f902-4c97-a7f5-689f13e4ef9e"). 
InnerVolumeSpecName "kube-api-access-ftclg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.585342 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af159393-f902-4c97-a7f5-689f13e4ef9e" (UID: "af159393-f902-4c97-a7f5-689f13e4ef9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.609751 4869 scope.go:117] "RemoveContainer" containerID="968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2" Mar 14 09:34:03 crc kubenswrapper[4869]: E0314 09:34:03.610429 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2\": container with ID starting with 968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2 not found: ID does not exist" containerID="968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.610499 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2"} err="failed to get container status \"968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2\": rpc error: code = NotFound desc = could not find container \"968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2\": container with ID starting with 968ec2856c4402411c5d2ce02a5a90114b0cbffd227260c164e169d19ce420e2 not found: ID does not exist" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.610660 4869 scope.go:117] "RemoveContainer" containerID="167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e" Mar 14 09:34:03 crc 
kubenswrapper[4869]: E0314 09:34:03.610986 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e\": container with ID starting with 167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e not found: ID does not exist" containerID="167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.611019 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e"} err="failed to get container status \"167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e\": rpc error: code = NotFound desc = could not find container \"167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e\": container with ID starting with 167d2d5417295a0d429531b5b78e7e08592bffaca385ba027ce662436e2ce44e not found: ID does not exist" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.611043 4869 scope.go:117] "RemoveContainer" containerID="5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301" Mar 14 09:34:03 crc kubenswrapper[4869]: E0314 09:34:03.611248 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301\": container with ID starting with 5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301 not found: ID does not exist" containerID="5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.611271 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301"} err="failed to get container status 
\"5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301\": rpc error: code = NotFound desc = could not find container \"5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301\": container with ID starting with 5833273ac9365a00f07677be215cd3830fcdcbc1d10cf1781760b755a4f4d301 not found: ID does not exist" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.636042 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftclg\" (UniqueName: \"kubernetes.io/projected/af159393-f902-4c97-a7f5-689f13e4ef9e-kube-api-access-ftclg\") on node \"crc\" DevicePath \"\"" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.636342 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af159393-f902-4c97-a7f5-689f13e4ef9e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.792153 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84shr"] Mar 14 09:34:03 crc kubenswrapper[4869]: I0314 09:34:03.800056 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-84shr"] Mar 14 09:34:04 crc kubenswrapper[4869]: I0314 09:34:04.843451 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558014-6hq9c" Mar 14 09:34:04 crc kubenswrapper[4869]: I0314 09:34:04.966482 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46ml6\" (UniqueName: \"kubernetes.io/projected/d395f158-c9ef-4b84-933d-13574bfb9445-kube-api-access-46ml6\") pod \"d395f158-c9ef-4b84-933d-13574bfb9445\" (UID: \"d395f158-c9ef-4b84-933d-13574bfb9445\") " Mar 14 09:34:04 crc kubenswrapper[4869]: I0314 09:34:04.973377 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d395f158-c9ef-4b84-933d-13574bfb9445-kube-api-access-46ml6" (OuterVolumeSpecName: "kube-api-access-46ml6") pod "d395f158-c9ef-4b84-933d-13574bfb9445" (UID: "d395f158-c9ef-4b84-933d-13574bfb9445"). InnerVolumeSpecName "kube-api-access-46ml6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.071442 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46ml6\" (UniqueName: \"kubernetes.io/projected/d395f158-c9ef-4b84-933d-13574bfb9445-kube-api-access-46ml6\") on node \"crc\" DevicePath \"\"" Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.491102 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558014-6hq9c" event={"ID":"d395f158-c9ef-4b84-933d-13574bfb9445","Type":"ContainerDied","Data":"cda5323eda00bec885c1deef1f57e46466f4e097178529a8286193416b778c51"} Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.491171 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda5323eda00bec885c1deef1f57e46466f4e097178529a8286193416b778c51" Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.491162 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558014-6hq9c" Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.554943 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558008-dv5r4"] Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.563012 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558008-dv5r4"] Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.683803 4869 scope.go:117] "RemoveContainer" containerID="e49c20e0db4363df7ab9b81c9e721d25ed583852678ce43bf2ff0a329c713947" Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.717096 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c45ab1-4de5-46c0-92c7-46fd95f53f74" path="/var/lib/kubelet/pods/a7c45ab1-4de5-46c0-92c7-46fd95f53f74/volumes" Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.718013 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af159393-f902-4c97-a7f5-689f13e4ef9e" path="/var/lib/kubelet/pods/af159393-f902-4c97-a7f5-689f13e4ef9e/volumes" Mar 14 09:34:05 crc kubenswrapper[4869]: I0314 09:34:05.753177 4869 scope.go:117] "RemoveContainer" containerID="70e1bf95f904f3b5fd934c7093e7d2268fe3294460e3fda035d65d68e6b7479e" Mar 14 09:34:09 crc kubenswrapper[4869]: I0314 09:34:09.704115 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:34:09 crc kubenswrapper[4869]: I0314 09:34:09.704654 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:34:09 crc kubenswrapper[4869]: E0314 09:34:09.704776 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" 
pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:34:09 crc kubenswrapper[4869]: E0314 09:34:09.705409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:34:24 crc kubenswrapper[4869]: I0314 09:34:24.704447 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:34:24 crc kubenswrapper[4869]: E0314 09:34:24.705478 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:34:24 crc kubenswrapper[4869]: I0314 09:34:24.705549 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:34:24 crc kubenswrapper[4869]: E0314 09:34:24.705796 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:34:37 crc kubenswrapper[4869]: I0314 09:34:37.713983 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:34:37 crc kubenswrapper[4869]: E0314 09:34:37.715324 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:34:39 crc kubenswrapper[4869]: I0314 09:34:39.704929 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:34:39 crc kubenswrapper[4869]: E0314 09:34:39.705642 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:34:49 crc kubenswrapper[4869]: I0314 09:34:49.710044 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:34:49 crc kubenswrapper[4869]: E0314 09:34:49.711885 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:34:50 crc kubenswrapper[4869]: I0314 09:34:50.704479 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:34:50 crc kubenswrapper[4869]: E0314 09:34:50.704727 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" 
podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:35:03 crc kubenswrapper[4869]: I0314 09:35:03.704081 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:35:03 crc kubenswrapper[4869]: E0314 09:35:03.704985 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:35:04 crc kubenswrapper[4869]: I0314 09:35:04.718325 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:35:04 crc kubenswrapper[4869]: E0314 09:35:04.718684 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:35:09 crc kubenswrapper[4869]: I0314 09:35:09.605378 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:35:09 crc kubenswrapper[4869]: I0314 09:35:09.606121 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:35:15 crc 
kubenswrapper[4869]: I0314 09:35:15.704915 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:35:15 crc kubenswrapper[4869]: E0314 09:35:15.705898 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:35:18 crc kubenswrapper[4869]: I0314 09:35:18.704060 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:35:18 crc kubenswrapper[4869]: E0314 09:35:18.704914 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.607342 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rkx99"] Mar 14 09:35:24 crc kubenswrapper[4869]: E0314 09:35:24.608460 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerName="extract-content" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608487 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerName="extract-content" Mar 14 09:35:24 crc kubenswrapper[4869]: E0314 09:35:24.608535 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerName="extract-utilities" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608542 4869 
state_mem.go:107] "Deleted CPUSet assignment" podUID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerName="extract-utilities" Mar 14 09:35:24 crc kubenswrapper[4869]: E0314 09:35:24.608561 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerName="registry-server" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608567 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerName="registry-server" Mar 14 09:35:24 crc kubenswrapper[4869]: E0314 09:35:24.608587 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d395f158-c9ef-4b84-933d-13574bfb9445" containerName="oc" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608593 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d395f158-c9ef-4b84-933d-13574bfb9445" containerName="oc" Mar 14 09:35:24 crc kubenswrapper[4869]: E0314 09:35:24.608604 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a825be5e-86a9-48ff-bcde-60751034cee1" containerName="registry-server" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608611 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a825be5e-86a9-48ff-bcde-60751034cee1" containerName="registry-server" Mar 14 09:35:24 crc kubenswrapper[4869]: E0314 09:35:24.608623 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a825be5e-86a9-48ff-bcde-60751034cee1" containerName="extract-utilities" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608629 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a825be5e-86a9-48ff-bcde-60751034cee1" containerName="extract-utilities" Mar 14 09:35:24 crc kubenswrapper[4869]: E0314 09:35:24.608641 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a825be5e-86a9-48ff-bcde-60751034cee1" containerName="extract-content" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608647 4869 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a825be5e-86a9-48ff-bcde-60751034cee1" containerName="extract-content" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608818 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a825be5e-86a9-48ff-bcde-60751034cee1" containerName="registry-server" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608831 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="af159393-f902-4c97-a7f5-689f13e4ef9e" containerName="registry-server" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.608848 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d395f158-c9ef-4b84-933d-13574bfb9445" containerName="oc" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.610247 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.620973 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkx99"] Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.697063 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-utilities\") pod \"redhat-marketplace-rkx99\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.697103 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb224\" (UniqueName: \"kubernetes.io/projected/31e380d6-8824-4965-918b-2813728afe96-kube-api-access-zb224\") pod \"redhat-marketplace-rkx99\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.697140 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-catalog-content\") pod \"redhat-marketplace-rkx99\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.798958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-utilities\") pod \"redhat-marketplace-rkx99\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.799322 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb224\" (UniqueName: \"kubernetes.io/projected/31e380d6-8824-4965-918b-2813728afe96-kube-api-access-zb224\") pod \"redhat-marketplace-rkx99\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.799465 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-catalog-content\") pod \"redhat-marketplace-rkx99\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.799498 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-utilities\") pod \"redhat-marketplace-rkx99\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.800110 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-catalog-content\") pod \"redhat-marketplace-rkx99\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.822477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb224\" (UniqueName: \"kubernetes.io/projected/31e380d6-8824-4965-918b-2813728afe96-kube-api-access-zb224\") pod \"redhat-marketplace-rkx99\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:24 crc kubenswrapper[4869]: I0314 09:35:24.929824 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:25 crc kubenswrapper[4869]: I0314 09:35:25.393258 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkx99"] Mar 14 09:35:25 crc kubenswrapper[4869]: W0314 09:35:25.395025 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e380d6_8824_4965_918b_2813728afe96.slice/crio-bbf9f7245274ac9b1dfd772208e64e1b8901fa6fc62af1cbc15f8498ade2faf6 WatchSource:0}: Error finding container bbf9f7245274ac9b1dfd772208e64e1b8901fa6fc62af1cbc15f8498ade2faf6: Status 404 returned error can't find the container with id bbf9f7245274ac9b1dfd772208e64e1b8901fa6fc62af1cbc15f8498ade2faf6 Mar 14 09:35:26 crc kubenswrapper[4869]: I0314 09:35:26.271036 4869 generic.go:334] "Generic (PLEG): container finished" podID="31e380d6-8824-4965-918b-2813728afe96" containerID="93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d" exitCode=0 Mar 14 09:35:26 crc kubenswrapper[4869]: I0314 09:35:26.271078 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-rkx99" event={"ID":"31e380d6-8824-4965-918b-2813728afe96","Type":"ContainerDied","Data":"93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d"} Mar 14 09:35:26 crc kubenswrapper[4869]: I0314 09:35:26.271102 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkx99" event={"ID":"31e380d6-8824-4965-918b-2813728afe96","Type":"ContainerStarted","Data":"bbf9f7245274ac9b1dfd772208e64e1b8901fa6fc62af1cbc15f8498ade2faf6"} Mar 14 09:35:26 crc kubenswrapper[4869]: I0314 09:35:26.274570 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:35:27 crc kubenswrapper[4869]: I0314 09:35:27.284361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkx99" event={"ID":"31e380d6-8824-4965-918b-2813728afe96","Type":"ContainerStarted","Data":"edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454"} Mar 14 09:35:28 crc kubenswrapper[4869]: I0314 09:35:28.297856 4869 generic.go:334] "Generic (PLEG): container finished" podID="31e380d6-8824-4965-918b-2813728afe96" containerID="edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454" exitCode=0 Mar 14 09:35:28 crc kubenswrapper[4869]: I0314 09:35:28.297941 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkx99" event={"ID":"31e380d6-8824-4965-918b-2813728afe96","Type":"ContainerDied","Data":"edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454"} Mar 14 09:35:29 crc kubenswrapper[4869]: I0314 09:35:29.309392 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkx99" event={"ID":"31e380d6-8824-4965-918b-2813728afe96","Type":"ContainerStarted","Data":"e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024"} Mar 14 09:35:29 crc kubenswrapper[4869]: I0314 09:35:29.329305 4869 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rkx99" podStartSLOduration=2.908630765 podStartE2EDuration="5.329269709s" podCreationTimestamp="2026-03-14 09:35:24 +0000 UTC" firstStartedPulling="2026-03-14 09:35:26.274366782 +0000 UTC m=+2279.246648835" lastFinishedPulling="2026-03-14 09:35:28.695005726 +0000 UTC m=+2281.667287779" observedRunningTime="2026-03-14 09:35:29.327835224 +0000 UTC m=+2282.300117277" watchObservedRunningTime="2026-03-14 09:35:29.329269709 +0000 UTC m=+2282.301551762" Mar 14 09:35:29 crc kubenswrapper[4869]: I0314 09:35:29.704759 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:35:29 crc kubenswrapper[4869]: E0314 09:35:29.705061 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:35:30 crc kubenswrapper[4869]: I0314 09:35:30.704766 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:35:30 crc kubenswrapper[4869]: E0314 09:35:30.705336 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:35:34 crc kubenswrapper[4869]: I0314 09:35:34.930146 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:34 crc kubenswrapper[4869]: I0314 09:35:34.930627 
4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:34 crc kubenswrapper[4869]: I0314 09:35:34.987279 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:35 crc kubenswrapper[4869]: I0314 09:35:35.420220 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:35 crc kubenswrapper[4869]: I0314 09:35:35.472045 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkx99"] Mar 14 09:35:37 crc kubenswrapper[4869]: I0314 09:35:37.386400 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rkx99" podUID="31e380d6-8824-4965-918b-2813728afe96" containerName="registry-server" containerID="cri-o://e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024" gracePeriod=2 Mar 14 09:35:37 crc kubenswrapper[4869]: I0314 09:35:37.850787 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:37 crc kubenswrapper[4869]: I0314 09:35:37.994961 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-catalog-content\") pod \"31e380d6-8824-4965-918b-2813728afe96\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " Mar 14 09:35:37 crc kubenswrapper[4869]: I0314 09:35:37.995130 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-utilities\") pod \"31e380d6-8824-4965-918b-2813728afe96\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " Mar 14 09:35:37 crc kubenswrapper[4869]: I0314 09:35:37.995159 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb224\" (UniqueName: \"kubernetes.io/projected/31e380d6-8824-4965-918b-2813728afe96-kube-api-access-zb224\") pod \"31e380d6-8824-4965-918b-2813728afe96\" (UID: \"31e380d6-8824-4965-918b-2813728afe96\") " Mar 14 09:35:37 crc kubenswrapper[4869]: I0314 09:35:37.996578 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-utilities" (OuterVolumeSpecName: "utilities") pod "31e380d6-8824-4965-918b-2813728afe96" (UID: "31e380d6-8824-4965-918b-2813728afe96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.004879 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e380d6-8824-4965-918b-2813728afe96-kube-api-access-zb224" (OuterVolumeSpecName: "kube-api-access-zb224") pod "31e380d6-8824-4965-918b-2813728afe96" (UID: "31e380d6-8824-4965-918b-2813728afe96"). InnerVolumeSpecName "kube-api-access-zb224". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.097569 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.097601 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb224\" (UniqueName: \"kubernetes.io/projected/31e380d6-8824-4965-918b-2813728afe96-kube-api-access-zb224\") on node \"crc\" DevicePath \"\"" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.103790 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31e380d6-8824-4965-918b-2813728afe96" (UID: "31e380d6-8824-4965-918b-2813728afe96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.199652 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e380d6-8824-4965-918b-2813728afe96-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.395760 4869 generic.go:334] "Generic (PLEG): container finished" podID="31e380d6-8824-4965-918b-2813728afe96" containerID="e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024" exitCode=0 Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.395822 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkx99" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.395844 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkx99" event={"ID":"31e380d6-8824-4965-918b-2813728afe96","Type":"ContainerDied","Data":"e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024"} Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.396680 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkx99" event={"ID":"31e380d6-8824-4965-918b-2813728afe96","Type":"ContainerDied","Data":"bbf9f7245274ac9b1dfd772208e64e1b8901fa6fc62af1cbc15f8498ade2faf6"} Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.396703 4869 scope.go:117] "RemoveContainer" containerID="e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.425189 4869 scope.go:117] "RemoveContainer" containerID="edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.429530 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkx99"] Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.441902 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkx99"] Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.447297 4869 scope.go:117] "RemoveContainer" containerID="93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.500536 4869 scope.go:117] "RemoveContainer" containerID="e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024" Mar 14 09:35:38 crc kubenswrapper[4869]: E0314 09:35:38.501079 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024\": container with ID starting with e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024 not found: ID does not exist" containerID="e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.501129 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024"} err="failed to get container status \"e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024\": rpc error: code = NotFound desc = could not find container \"e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024\": container with ID starting with e738f99000c6f66734e3ebee6dd6537bbecd2efa2da8b12d2e10843098f3c024 not found: ID does not exist" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.501163 4869 scope.go:117] "RemoveContainer" containerID="edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454" Mar 14 09:35:38 crc kubenswrapper[4869]: E0314 09:35:38.501547 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454\": container with ID starting with edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454 not found: ID does not exist" containerID="edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.501571 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454"} err="failed to get container status \"edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454\": rpc error: code = NotFound desc = could not find container \"edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454\": container with ID 
starting with edabe51b89c1a868f78a647e42b61e17edfa36b9bb5fc93f996c4c033b04e454 not found: ID does not exist" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.501589 4869 scope.go:117] "RemoveContainer" containerID="93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d" Mar 14 09:35:38 crc kubenswrapper[4869]: E0314 09:35:38.501891 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d\": container with ID starting with 93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d not found: ID does not exist" containerID="93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d" Mar 14 09:35:38 crc kubenswrapper[4869]: I0314 09:35:38.501938 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d"} err="failed to get container status \"93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d\": rpc error: code = NotFound desc = could not find container \"93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d\": container with ID starting with 93eefa11e394add966e1cf2de9b1d1f01b26374ad72b5c7357d87c90e45a392d not found: ID does not exist" Mar 14 09:35:39 crc kubenswrapper[4869]: I0314 09:35:39.605882 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:35:39 crc kubenswrapper[4869]: I0314 09:35:39.606279 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:35:39 crc kubenswrapper[4869]: I0314 09:35:39.716552 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e380d6-8824-4965-918b-2813728afe96" path="/var/lib/kubelet/pods/31e380d6-8824-4965-918b-2813728afe96/volumes" Mar 14 09:35:40 crc kubenswrapper[4869]: I0314 09:35:40.704657 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:35:40 crc kubenswrapper[4869]: E0314 09:35:40.704923 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:35:42 crc kubenswrapper[4869]: I0314 09:35:42.704112 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:35:42 crc kubenswrapper[4869]: E0314 09:35:42.704639 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:35:51 crc kubenswrapper[4869]: I0314 09:35:51.704467 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:35:51 crc kubenswrapper[4869]: E0314 09:35:51.705188 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:35:55 crc kubenswrapper[4869]: I0314 09:35:55.704323 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:35:55 crc kubenswrapper[4869]: E0314 09:35:55.705186 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.149214 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558016-8f2wm"] Mar 14 09:36:00 crc kubenswrapper[4869]: E0314 09:36:00.150294 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e380d6-8824-4965-918b-2813728afe96" containerName="extract-content" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.150309 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e380d6-8824-4965-918b-2813728afe96" containerName="extract-content" Mar 14 09:36:00 crc kubenswrapper[4869]: E0314 09:36:00.150360 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e380d6-8824-4965-918b-2813728afe96" containerName="registry-server" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.150369 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e380d6-8824-4965-918b-2813728afe96" containerName="registry-server" Mar 14 09:36:00 crc kubenswrapper[4869]: E0314 09:36:00.150386 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e380d6-8824-4965-918b-2813728afe96" containerName="extract-utilities" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.150394 4869 
state_mem.go:107] "Deleted CPUSet assignment" podUID="31e380d6-8824-4965-918b-2813728afe96" containerName="extract-utilities" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.150645 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e380d6-8824-4965-918b-2813728afe96" containerName="registry-server" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.151429 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558016-8f2wm" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.155356 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.156618 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.156658 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.171116 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558016-8f2wm"] Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.199678 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm4cz\" (UniqueName: \"kubernetes.io/projected/716760de-3737-4911-bf17-2e731be929cd-kube-api-access-gm4cz\") pod \"auto-csr-approver-29558016-8f2wm\" (UID: \"716760de-3737-4911-bf17-2e731be929cd\") " pod="openshift-infra/auto-csr-approver-29558016-8f2wm" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.301834 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm4cz\" (UniqueName: \"kubernetes.io/projected/716760de-3737-4911-bf17-2e731be929cd-kube-api-access-gm4cz\") pod \"auto-csr-approver-29558016-8f2wm\" (UID: 
\"716760de-3737-4911-bf17-2e731be929cd\") " pod="openshift-infra/auto-csr-approver-29558016-8f2wm" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.323571 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm4cz\" (UniqueName: \"kubernetes.io/projected/716760de-3737-4911-bf17-2e731be929cd-kube-api-access-gm4cz\") pod \"auto-csr-approver-29558016-8f2wm\" (UID: \"716760de-3737-4911-bf17-2e731be929cd\") " pod="openshift-infra/auto-csr-approver-29558016-8f2wm" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.473766 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558016-8f2wm" Mar 14 09:36:00 crc kubenswrapper[4869]: I0314 09:36:00.965742 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558016-8f2wm"] Mar 14 09:36:01 crc kubenswrapper[4869]: I0314 09:36:01.624970 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558016-8f2wm" event={"ID":"716760de-3737-4911-bf17-2e731be929cd","Type":"ContainerStarted","Data":"b1852f003ba64eca68f0c0d51c330826d06a473ad5f0ca51218c4fb51996e628"} Mar 14 09:36:02 crc kubenswrapper[4869]: I0314 09:36:02.635731 4869 generic.go:334] "Generic (PLEG): container finished" podID="716760de-3737-4911-bf17-2e731be929cd" containerID="efae33106513458e6c2e84bb707e138b5298a6bc82b7195a14620c6ee0fe2879" exitCode=0 Mar 14 09:36:02 crc kubenswrapper[4869]: I0314 09:36:02.635877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558016-8f2wm" event={"ID":"716760de-3737-4911-bf17-2e731be929cd","Type":"ContainerDied","Data":"efae33106513458e6c2e84bb707e138b5298a6bc82b7195a14620c6ee0fe2879"} Mar 14 09:36:04 crc kubenswrapper[4869]: I0314 09:36:04.062123 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558016-8f2wm" Mar 14 09:36:04 crc kubenswrapper[4869]: I0314 09:36:04.181814 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm4cz\" (UniqueName: \"kubernetes.io/projected/716760de-3737-4911-bf17-2e731be929cd-kube-api-access-gm4cz\") pod \"716760de-3737-4911-bf17-2e731be929cd\" (UID: \"716760de-3737-4911-bf17-2e731be929cd\") " Mar 14 09:36:04 crc kubenswrapper[4869]: I0314 09:36:04.206058 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/716760de-3737-4911-bf17-2e731be929cd-kube-api-access-gm4cz" (OuterVolumeSpecName: "kube-api-access-gm4cz") pod "716760de-3737-4911-bf17-2e731be929cd" (UID: "716760de-3737-4911-bf17-2e731be929cd"). InnerVolumeSpecName "kube-api-access-gm4cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:36:04 crc kubenswrapper[4869]: I0314 09:36:04.283675 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm4cz\" (UniqueName: \"kubernetes.io/projected/716760de-3737-4911-bf17-2e731be929cd-kube-api-access-gm4cz\") on node \"crc\" DevicePath \"\"" Mar 14 09:36:04 crc kubenswrapper[4869]: I0314 09:36:04.655395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558016-8f2wm" event={"ID":"716760de-3737-4911-bf17-2e731be929cd","Type":"ContainerDied","Data":"b1852f003ba64eca68f0c0d51c330826d06a473ad5f0ca51218c4fb51996e628"} Mar 14 09:36:04 crc kubenswrapper[4869]: I0314 09:36:04.655465 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1852f003ba64eca68f0c0d51c330826d06a473ad5f0ca51218c4fb51996e628" Mar 14 09:36:04 crc kubenswrapper[4869]: I0314 09:36:04.655489 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558016-8f2wm" Mar 14 09:36:04 crc kubenswrapper[4869]: I0314 09:36:04.704545 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:36:04 crc kubenswrapper[4869]: E0314 09:36:04.705013 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:36:05 crc kubenswrapper[4869]: I0314 09:36:05.142694 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558010-jkt9w"] Mar 14 09:36:05 crc kubenswrapper[4869]: I0314 09:36:05.150594 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558010-jkt9w"] Mar 14 09:36:05 crc kubenswrapper[4869]: I0314 09:36:05.717613 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b270d28-1b86-4702-b593-61c411f3c21f" path="/var/lib/kubelet/pods/6b270d28-1b86-4702-b593-61c411f3c21f/volumes" Mar 14 09:36:05 crc kubenswrapper[4869]: I0314 09:36:05.955418 4869 scope.go:117] "RemoveContainer" containerID="929fbe70504a4886fd26e8504dab716d300ba97ea5c16169cb6c4f76b69fd8df" Mar 14 09:36:06 crc kubenswrapper[4869]: I0314 09:36:06.704088 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:36:06 crc kubenswrapper[4869]: E0314 09:36:06.706645 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" 
podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:36:09 crc kubenswrapper[4869]: I0314 09:36:09.605901 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:36:09 crc kubenswrapper[4869]: I0314 09:36:09.606220 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:36:09 crc kubenswrapper[4869]: I0314 09:36:09.606276 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:36:09 crc kubenswrapper[4869]: I0314 09:36:09.607002 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:36:09 crc kubenswrapper[4869]: I0314 09:36:09.607072 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" gracePeriod=600 Mar 14 09:36:09 crc kubenswrapper[4869]: E0314 09:36:09.748870 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:36:10 crc kubenswrapper[4869]: I0314 09:36:10.716359 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" exitCode=0 Mar 14 09:36:10 crc kubenswrapper[4869]: I0314 09:36:10.716458 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"} Mar 14 09:36:10 crc kubenswrapper[4869]: I0314 09:36:10.717116 4869 scope.go:117] "RemoveContainer" containerID="bdb619bb204b11c09dc2ed986fb9d6c329d9b4662bc4af33573b397425dd1bcd" Mar 14 09:36:10 crc kubenswrapper[4869]: I0314 09:36:10.717853 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:36:10 crc kubenswrapper[4869]: E0314 09:36:10.718170 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:36:16 crc kubenswrapper[4869]: I0314 09:36:16.703755 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:36:16 crc kubenswrapper[4869]: 
E0314 09:36:16.704347 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:36:21 crc kubenswrapper[4869]: I0314 09:36:21.706292 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:36:21 crc kubenswrapper[4869]: I0314 09:36:21.707049 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:36:21 crc kubenswrapper[4869]: E0314 09:36:21.707244 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:36:21 crc kubenswrapper[4869]: E0314 09:36:21.707470 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:36:30 crc kubenswrapper[4869]: I0314 09:36:30.704599 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:36:30 crc kubenswrapper[4869]: E0314 09:36:30.705744 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:36:33 crc kubenswrapper[4869]: I0314 09:36:33.704050 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:36:33 crc kubenswrapper[4869]: E0314 09:36:33.704669 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:36:34 crc kubenswrapper[4869]: I0314 09:36:34.703465 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:36:34 crc kubenswrapper[4869]: E0314 09:36:34.707769 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:36:45 crc kubenswrapper[4869]: I0314 09:36:45.703611 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:36:45 crc kubenswrapper[4869]: E0314 09:36:45.704329 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" 
pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:36:47 crc kubenswrapper[4869]: I0314 09:36:47.710336 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:36:48 crc kubenswrapper[4869]: I0314 09:36:48.068395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"} Mar 14 09:36:48 crc kubenswrapper[4869]: I0314 09:36:48.703929 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:36:48 crc kubenswrapper[4869]: E0314 09:36:48.704207 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:36:54 crc kubenswrapper[4869]: I0314 09:36:54.539320 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:36:54 crc kubenswrapper[4869]: I0314 09:36:54.539981 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:36:56 crc kubenswrapper[4869]: I0314 09:36:56.143234 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" exitCode=1 Mar 14 09:36:56 crc kubenswrapper[4869]: I0314 09:36:56.143302 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"} Mar 14 09:36:56 crc kubenswrapper[4869]: I0314 09:36:56.143702 4869 scope.go:117] "RemoveContainer" containerID="fa35414ca56489ee6cf4e4b620dfd06e725646fe68f3d491dbfa1ad53d8d5970" Mar 14 09:36:56 crc kubenswrapper[4869]: I0314 09:36:56.144805 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:36:56 crc kubenswrapper[4869]: E0314 09:36:56.145333 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:36:57 crc kubenswrapper[4869]: I0314 09:36:57.714962 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:36:57 crc kubenswrapper[4869]: E0314 09:36:57.715535 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:36:59 crc kubenswrapper[4869]: I0314 09:36:59.704714 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:36:59 crc kubenswrapper[4869]: E0314 09:36:59.705426 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:37:04 crc kubenswrapper[4869]: I0314 09:37:04.538715 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:37:04 crc kubenswrapper[4869]: I0314 09:37:04.541392 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:37:04 crc kubenswrapper[4869]: I0314 09:37:04.542315 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:37:04 crc kubenswrapper[4869]: E0314 09:37:04.542997 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:37:05 crc kubenswrapper[4869]: I0314 09:37:05.255582 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:37:05 crc kubenswrapper[4869]: E0314 09:37:05.255817 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:37:10 crc kubenswrapper[4869]: I0314 09:37:10.703663 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:37:10 crc kubenswrapper[4869]: E0314 09:37:10.704385 
4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:37:11 crc kubenswrapper[4869]: I0314 09:37:11.704584 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:37:12 crc kubenswrapper[4869]: I0314 09:37:12.318824 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"} Mar 14 09:37:14 crc kubenswrapper[4869]: I0314 09:37:14.404976 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:37:14 crc kubenswrapper[4869]: I0314 09:37:14.406918 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:37:19 crc kubenswrapper[4869]: I0314 09:37:19.711423 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:37:19 crc kubenswrapper[4869]: E0314 09:37:19.712355 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:37:20 crc kubenswrapper[4869]: I0314 09:37:20.415310 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" exitCode=1 Mar 14 09:37:20 crc kubenswrapper[4869]: I0314 09:37:20.415376 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"} Mar 14 09:37:20 crc kubenswrapper[4869]: I0314 09:37:20.415474 4869 scope.go:117] "RemoveContainer" containerID="e64b460bfd5102a0286e521531b7c67dedaa738a0adf67f27e47eec18d5ef99e" Mar 14 09:37:20 crc kubenswrapper[4869]: I0314 09:37:20.416394 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:37:20 crc kubenswrapper[4869]: E0314 09:37:20.416777 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:37:23 crc kubenswrapper[4869]: I0314 09:37:23.704467 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:37:23 crc kubenswrapper[4869]: E0314 09:37:23.706278 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:37:24 crc kubenswrapper[4869]: I0314 09:37:24.404372 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:37:24 crc kubenswrapper[4869]: I0314 09:37:24.404731 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:37:24 crc kubenswrapper[4869]: I0314 09:37:24.405446 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:37:24 crc kubenswrapper[4869]: E0314 09:37:24.405652 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:37:30 crc kubenswrapper[4869]: I0314 09:37:30.704354 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:37:30 crc kubenswrapper[4869]: E0314 09:37:30.705078 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:37:36 crc kubenswrapper[4869]: I0314 09:37:36.704357 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:37:36 crc kubenswrapper[4869]: E0314 09:37:36.705458 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:37:37 crc kubenswrapper[4869]: I0314 09:37:37.710329 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:37:37 crc kubenswrapper[4869]: E0314 09:37:37.710820 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:37:43 crc kubenswrapper[4869]: I0314 09:37:43.705747 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:37:43 crc kubenswrapper[4869]: E0314 09:37:43.707108 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:37:47 crc kubenswrapper[4869]: I0314 09:37:47.712178 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:37:47 crc kubenswrapper[4869]: E0314 09:37:47.712936 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:37:49 crc kubenswrapper[4869]: I0314 09:37:49.704604 4869 scope.go:117] "RemoveContainer" 
containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:37:49 crc kubenswrapper[4869]: E0314 09:37:49.705401 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:37:54 crc kubenswrapper[4869]: I0314 09:37:54.703872 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:37:54 crc kubenswrapper[4869]: E0314 09:37:54.704563 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.166637 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558018-q64kr"] Mar 14 09:38:00 crc kubenswrapper[4869]: E0314 09:38:00.167726 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="716760de-3737-4911-bf17-2e731be929cd" containerName="oc" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.167746 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="716760de-3737-4911-bf17-2e731be929cd" containerName="oc" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.168028 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="716760de-3737-4911-bf17-2e731be929cd" containerName="oc" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.169007 4869 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558018-q64kr" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.171479 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.171695 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.171775 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.190561 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558018-q64kr"] Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.301017 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn7k5\" (UniqueName: \"kubernetes.io/projected/cd1362cc-66c4-4c5f-a45e-29eefe172762-kube-api-access-pn7k5\") pod \"auto-csr-approver-29558018-q64kr\" (UID: \"cd1362cc-66c4-4c5f-a45e-29eefe172762\") " pod="openshift-infra/auto-csr-approver-29558018-q64kr" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.402778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn7k5\" (UniqueName: \"kubernetes.io/projected/cd1362cc-66c4-4c5f-a45e-29eefe172762-kube-api-access-pn7k5\") pod \"auto-csr-approver-29558018-q64kr\" (UID: \"cd1362cc-66c4-4c5f-a45e-29eefe172762\") " pod="openshift-infra/auto-csr-approver-29558018-q64kr" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.422883 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn7k5\" (UniqueName: \"kubernetes.io/projected/cd1362cc-66c4-4c5f-a45e-29eefe172762-kube-api-access-pn7k5\") pod \"auto-csr-approver-29558018-q64kr\" (UID: \"cd1362cc-66c4-4c5f-a45e-29eefe172762\") " 
pod="openshift-infra/auto-csr-approver-29558018-q64kr" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.503872 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558018-q64kr" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.703917 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:38:00 crc kubenswrapper[4869]: E0314 09:38:00.704548 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:38:00 crc kubenswrapper[4869]: I0314 09:38:00.978107 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558018-q64kr"] Mar 14 09:38:00 crc kubenswrapper[4869]: W0314 09:38:00.985158 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd1362cc_66c4_4c5f_a45e_29eefe172762.slice/crio-b53a1cd00be0acda6bb77bdb8eaa3fa563494010ff1771bc34ac5c0fa43d053c WatchSource:0}: Error finding container b53a1cd00be0acda6bb77bdb8eaa3fa563494010ff1771bc34ac5c0fa43d053c: Status 404 returned error can't find the container with id b53a1cd00be0acda6bb77bdb8eaa3fa563494010ff1771bc34ac5c0fa43d053c Mar 14 09:38:01 crc kubenswrapper[4869]: I0314 09:38:01.704619 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:38:01 crc kubenswrapper[4869]: E0314 09:38:01.705197 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:38:01 crc kubenswrapper[4869]: I0314 09:38:01.812108 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558018-q64kr" event={"ID":"cd1362cc-66c4-4c5f-a45e-29eefe172762","Type":"ContainerStarted","Data":"b53a1cd00be0acda6bb77bdb8eaa3fa563494010ff1771bc34ac5c0fa43d053c"} Mar 14 09:38:02 crc kubenswrapper[4869]: I0314 09:38:02.826491 4869 generic.go:334] "Generic (PLEG): container finished" podID="cd1362cc-66c4-4c5f-a45e-29eefe172762" containerID="6477206c4a9ba42b95be6a23539f7da18ff48dd67b970f52ad9aeee87883de98" exitCode=0 Mar 14 09:38:02 crc kubenswrapper[4869]: I0314 09:38:02.826587 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558018-q64kr" event={"ID":"cd1362cc-66c4-4c5f-a45e-29eefe172762","Type":"ContainerDied","Data":"6477206c4a9ba42b95be6a23539f7da18ff48dd67b970f52ad9aeee87883de98"} Mar 14 09:38:04 crc kubenswrapper[4869]: I0314 09:38:04.208249 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558018-q64kr" Mar 14 09:38:04 crc kubenswrapper[4869]: I0314 09:38:04.399513 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn7k5\" (UniqueName: \"kubernetes.io/projected/cd1362cc-66c4-4c5f-a45e-29eefe172762-kube-api-access-pn7k5\") pod \"cd1362cc-66c4-4c5f-a45e-29eefe172762\" (UID: \"cd1362cc-66c4-4c5f-a45e-29eefe172762\") " Mar 14 09:38:04 crc kubenswrapper[4869]: I0314 09:38:04.411849 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1362cc-66c4-4c5f-a45e-29eefe172762-kube-api-access-pn7k5" (OuterVolumeSpecName: "kube-api-access-pn7k5") pod "cd1362cc-66c4-4c5f-a45e-29eefe172762" (UID: "cd1362cc-66c4-4c5f-a45e-29eefe172762"). InnerVolumeSpecName "kube-api-access-pn7k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:38:04 crc kubenswrapper[4869]: I0314 09:38:04.507199 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn7k5\" (UniqueName: \"kubernetes.io/projected/cd1362cc-66c4-4c5f-a45e-29eefe172762-kube-api-access-pn7k5\") on node \"crc\" DevicePath \"\"" Mar 14 09:38:04 crc kubenswrapper[4869]: I0314 09:38:04.845071 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558018-q64kr" event={"ID":"cd1362cc-66c4-4c5f-a45e-29eefe172762","Type":"ContainerDied","Data":"b53a1cd00be0acda6bb77bdb8eaa3fa563494010ff1771bc34ac5c0fa43d053c"} Mar 14 09:38:04 crc kubenswrapper[4869]: I0314 09:38:04.845122 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558018-q64kr" Mar 14 09:38:04 crc kubenswrapper[4869]: I0314 09:38:04.845129 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53a1cd00be0acda6bb77bdb8eaa3fa563494010ff1771bc34ac5c0fa43d053c" Mar 14 09:38:05 crc kubenswrapper[4869]: I0314 09:38:05.279302 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558012-6flkj"] Mar 14 09:38:05 crc kubenswrapper[4869]: I0314 09:38:05.289939 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558012-6flkj"] Mar 14 09:38:05 crc kubenswrapper[4869]: I0314 09:38:05.704103 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:38:05 crc kubenswrapper[4869]: E0314 09:38:05.704406 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:38:05 crc kubenswrapper[4869]: I0314 09:38:05.715553 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1d7dbb-6c49-4285-ba32-6f7ec559b77c" path="/var/lib/kubelet/pods/8a1d7dbb-6c49-4285-ba32-6f7ec559b77c/volumes" Mar 14 09:38:06 crc kubenswrapper[4869]: I0314 09:38:06.094105 4869 scope.go:117] "RemoveContainer" containerID="5b571477932a16c70268eb0a1e629653f7d2ae0f050acb002279f80844537b0a" Mar 14 09:38:14 crc kubenswrapper[4869]: I0314 09:38:14.703875 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:38:14 crc kubenswrapper[4869]: E0314 09:38:14.704665 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:38:15 crc kubenswrapper[4869]: I0314 09:38:15.704832 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:38:15 crc kubenswrapper[4869]: E0314 09:38:15.705760 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:38:20 crc kubenswrapper[4869]: I0314 09:38:20.704197 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:38:20 crc kubenswrapper[4869]: E0314 09:38:20.704998 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:38:25 crc kubenswrapper[4869]: I0314 09:38:25.704424 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:38:25 crc kubenswrapper[4869]: E0314 09:38:25.705438 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:38:27 crc kubenswrapper[4869]: I0314 09:38:27.711294 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:38:27 crc kubenswrapper[4869]: E0314 09:38:27.711871 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:38:33 crc kubenswrapper[4869]: I0314 09:38:33.703874 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:38:33 crc kubenswrapper[4869]: E0314 09:38:33.705035 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:38:39 crc kubenswrapper[4869]: I0314 09:38:39.704914 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:38:39 crc kubenswrapper[4869]: E0314 09:38:39.705992 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:38:41 crc kubenswrapper[4869]: I0314 09:38:41.704363 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:38:41 crc kubenswrapper[4869]: E0314 09:38:41.705324 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:38:48 crc kubenswrapper[4869]: I0314 09:38:48.703710 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:38:48 crc kubenswrapper[4869]: E0314 09:38:48.704397 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:38:54 crc kubenswrapper[4869]: I0314 09:38:54.704562 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:38:54 crc kubenswrapper[4869]: E0314 09:38:54.705868 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:38:55 crc kubenswrapper[4869]: I0314 09:38:55.704642 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:38:55 crc kubenswrapper[4869]: E0314 09:38:55.705121 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:38:59 crc kubenswrapper[4869]: I0314 09:38:59.704762 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:38:59 crc kubenswrapper[4869]: E0314 09:38:59.706867 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:39:06 crc kubenswrapper[4869]: I0314 09:39:06.704113 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:39:06 crc kubenswrapper[4869]: E0314 09:39:06.704898 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:39:07 crc kubenswrapper[4869]: I0314 09:39:07.717761 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:39:07 crc kubenswrapper[4869]: E0314 09:39:07.718212 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:39:14 crc kubenswrapper[4869]: I0314 09:39:14.704082 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:39:14 crc kubenswrapper[4869]: E0314 09:39:14.704877 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:39:18 crc kubenswrapper[4869]: I0314 09:39:18.704252 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:39:18 crc kubenswrapper[4869]: E0314 09:39:18.704882 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:39:20 crc kubenswrapper[4869]: I0314 09:39:20.704972 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:39:20 crc kubenswrapper[4869]: E0314 09:39:20.705635 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:39:29 crc kubenswrapper[4869]: I0314 09:39:29.703386 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:39:29 crc kubenswrapper[4869]: E0314 09:39:29.704174 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:39:32 crc kubenswrapper[4869]: I0314 09:39:32.704255 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:39:32 crc kubenswrapper[4869]: E0314 09:39:32.704967 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:39:33 crc kubenswrapper[4869]: I0314 09:39:33.705047 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:39:33 crc kubenswrapper[4869]: E0314 09:39:33.705618 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:39:41 crc kubenswrapper[4869]: I0314 09:39:41.704991 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:39:41 crc kubenswrapper[4869]: E0314 09:39:41.706208 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:39:45 crc kubenswrapper[4869]: I0314 09:39:45.704986 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:39:45 crc kubenswrapper[4869]: E0314 09:39:45.705968 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:39:46 crc kubenswrapper[4869]: I0314 09:39:46.704171 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:39:46 crc kubenswrapper[4869]: E0314 09:39:46.704498 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:39:53 crc kubenswrapper[4869]: I0314 09:39:53.705429 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:39:53 crc kubenswrapper[4869]: E0314 09:39:53.706301 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:39:58 crc kubenswrapper[4869]: I0314 09:39:58.703579 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:39:58 crc kubenswrapper[4869]: E0314 09:39:58.704219 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:39:58 crc kubenswrapper[4869]: I0314 09:39:58.705614 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:39:58 crc kubenswrapper[4869]: E0314 09:39:58.706899 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.151597 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558020-ksrcm"]
Mar 14 09:40:00 crc kubenswrapper[4869]: E0314 09:40:00.152227 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1362cc-66c4-4c5f-a45e-29eefe172762" containerName="oc"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.152240 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1362cc-66c4-4c5f-a45e-29eefe172762" containerName="oc"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.152458 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1362cc-66c4-4c5f-a45e-29eefe172762" containerName="oc"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.153238 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558020-ksrcm"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.156338 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.157162 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.157646 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.160128 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558020-ksrcm"]
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.315961 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt96b\" (UniqueName: \"kubernetes.io/projected/6fda3e65-ac70-412f-aa48-ed55a48c4774-kube-api-access-bt96b\") pod \"auto-csr-approver-29558020-ksrcm\" (UID: \"6fda3e65-ac70-412f-aa48-ed55a48c4774\") " pod="openshift-infra/auto-csr-approver-29558020-ksrcm"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.419685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt96b\" (UniqueName: \"kubernetes.io/projected/6fda3e65-ac70-412f-aa48-ed55a48c4774-kube-api-access-bt96b\") pod \"auto-csr-approver-29558020-ksrcm\" (UID: \"6fda3e65-ac70-412f-aa48-ed55a48c4774\") " pod="openshift-infra/auto-csr-approver-29558020-ksrcm"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.440218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt96b\" (UniqueName: \"kubernetes.io/projected/6fda3e65-ac70-412f-aa48-ed55a48c4774-kube-api-access-bt96b\") pod \"auto-csr-approver-29558020-ksrcm\" (UID: \"6fda3e65-ac70-412f-aa48-ed55a48c4774\") " pod="openshift-infra/auto-csr-approver-29558020-ksrcm"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.478180 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558020-ksrcm"
Mar 14 09:40:00 crc kubenswrapper[4869]: I0314 09:40:00.975873 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558020-ksrcm"]
Mar 14 09:40:01 crc kubenswrapper[4869]: I0314 09:40:01.210986 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558020-ksrcm" event={"ID":"6fda3e65-ac70-412f-aa48-ed55a48c4774","Type":"ContainerStarted","Data":"2240a5dd9b939ae91ebd161dc3347419c7556ee93eb8be416e848f00876b6a2a"}
Mar 14 09:40:03 crc kubenswrapper[4869]: I0314 09:40:03.242556 4869 generic.go:334] "Generic (PLEG): container finished" podID="6fda3e65-ac70-412f-aa48-ed55a48c4774" containerID="1dd86f6ad1be40932867bf321d0df6cc0685194318cd511353ed4cb59efeb9f4" exitCode=0
Mar 14 09:40:03 crc kubenswrapper[4869]: I0314 09:40:03.242825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558020-ksrcm" event={"ID":"6fda3e65-ac70-412f-aa48-ed55a48c4774","Type":"ContainerDied","Data":"1dd86f6ad1be40932867bf321d0df6cc0685194318cd511353ed4cb59efeb9f4"}
Mar 14 09:40:04 crc kubenswrapper[4869]: I0314 09:40:04.681494 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558020-ksrcm"
Mar 14 09:40:04 crc kubenswrapper[4869]: I0314 09:40:04.705371 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:40:04 crc kubenswrapper[4869]: E0314 09:40:04.705752 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:40:04 crc kubenswrapper[4869]: I0314 09:40:04.820733 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt96b\" (UniqueName: \"kubernetes.io/projected/6fda3e65-ac70-412f-aa48-ed55a48c4774-kube-api-access-bt96b\") pod \"6fda3e65-ac70-412f-aa48-ed55a48c4774\" (UID: \"6fda3e65-ac70-412f-aa48-ed55a48c4774\") "
Mar 14 09:40:04 crc kubenswrapper[4869]: I0314 09:40:04.827567 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fda3e65-ac70-412f-aa48-ed55a48c4774-kube-api-access-bt96b" (OuterVolumeSpecName: "kube-api-access-bt96b") pod "6fda3e65-ac70-412f-aa48-ed55a48c4774" (UID: "6fda3e65-ac70-412f-aa48-ed55a48c4774"). InnerVolumeSpecName "kube-api-access-bt96b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:40:04 crc kubenswrapper[4869]: I0314 09:40:04.924019 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt96b\" (UniqueName: \"kubernetes.io/projected/6fda3e65-ac70-412f-aa48-ed55a48c4774-kube-api-access-bt96b\") on node \"crc\" DevicePath \"\""
Mar 14 09:40:05 crc kubenswrapper[4869]: I0314 09:40:05.306556 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558020-ksrcm" event={"ID":"6fda3e65-ac70-412f-aa48-ed55a48c4774","Type":"ContainerDied","Data":"2240a5dd9b939ae91ebd161dc3347419c7556ee93eb8be416e848f00876b6a2a"}
Mar 14 09:40:05 crc kubenswrapper[4869]: I0314 09:40:05.306615 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2240a5dd9b939ae91ebd161dc3347419c7556ee93eb8be416e848f00876b6a2a"
Mar 14 09:40:05 crc kubenswrapper[4869]: I0314 09:40:05.307051 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558020-ksrcm"
Mar 14 09:40:05 crc kubenswrapper[4869]: I0314 09:40:05.760946 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558014-6hq9c"]
Mar 14 09:40:05 crc kubenswrapper[4869]: I0314 09:40:05.769646 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558014-6hq9c"]
Mar 14 09:40:07 crc kubenswrapper[4869]: I0314 09:40:07.717406 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d395f158-c9ef-4b84-933d-13574bfb9445" path="/var/lib/kubelet/pods/d395f158-c9ef-4b84-933d-13574bfb9445/volumes"
Mar 14 09:40:12 crc kubenswrapper[4869]: I0314 09:40:12.703627 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:40:12 crc kubenswrapper[4869]: E0314 09:40:12.712829 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:40:13 crc kubenswrapper[4869]: I0314 09:40:13.705017 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:40:13 crc kubenswrapper[4869]: E0314 09:40:13.705609 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:40:16 crc kubenswrapper[4869]: I0314 09:40:16.704003 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:40:16 crc kubenswrapper[4869]: E0314 09:40:16.704952 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:40:23 crc kubenswrapper[4869]: I0314 09:40:23.704480 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:40:23 crc kubenswrapper[4869]: E0314 09:40:23.705307 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:40:27 crc kubenswrapper[4869]: I0314 09:40:27.712737 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:40:27 crc kubenswrapper[4869]: E0314 09:40:27.713548 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:40:31 crc kubenswrapper[4869]: I0314 09:40:31.704131 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:40:31 crc kubenswrapper[4869]: E0314 09:40:31.705004 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:40:38 crc kubenswrapper[4869]: I0314 09:40:38.704271 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:40:38 crc kubenswrapper[4869]: E0314 09:40:38.705086 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:40:42 crc kubenswrapper[4869]: I0314 09:40:42.704756 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:40:42 crc kubenswrapper[4869]: E0314 09:40:42.705508 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:40:45 crc kubenswrapper[4869]: I0314 09:40:45.703973 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:40:45 crc kubenswrapper[4869]: E0314 09:40:45.704484 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:40:49 crc kubenswrapper[4869]: I0314 09:40:49.703743 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:40:49 crc kubenswrapper[4869]: E0314 09:40:49.704288 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:40:55 crc kubenswrapper[4869]: I0314 09:40:55.703725 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:40:55 crc kubenswrapper[4869]: E0314 09:40:55.704805 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:40:57 crc kubenswrapper[4869]: I0314 09:40:57.710135 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:40:57 crc kubenswrapper[4869]: E0314 09:40:57.710881 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:41:00 crc kubenswrapper[4869]: I0314 09:41:00.705066 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:41:00 crc kubenswrapper[4869]: E0314 09:41:00.705688 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:41:06 crc kubenswrapper[4869]: I0314 09:41:06.221316 4869 scope.go:117] "RemoveContainer" containerID="e642ee38262928152068d33c687cd0c529953d6ff312439890e488d3634b8002"
Mar 14 09:41:06 crc kubenswrapper[4869]: I0314 09:41:06.705064 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:41:06 crc kubenswrapper[4869]: E0314 09:41:06.705838 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 09:41:11 crc kubenswrapper[4869]: I0314 09:41:11.703389 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:41:11 crc kubenswrapper[4869]: E0314 09:41:11.704135 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:41:15 crc kubenswrapper[4869]: I0314 09:41:15.704258 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:41:15 crc kubenswrapper[4869]: E0314 09:41:15.705004 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:41:21 crc kubenswrapper[4869]: I0314 09:41:21.703762 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd"
Mar 14 09:41:21 crc kubenswrapper[4869]: I0314 09:41:21.966779 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"c9cd0c2cf419477519bc0c878cc4d31fa5648d9e139fd0661ec36ae8f1c04dd7"}
Mar 14 09:41:23 crc kubenswrapper[4869]: I0314 09:41:23.704218 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:41:23 crc kubenswrapper[4869]: E0314 09:41:23.705165 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:41:27 crc kubenswrapper[4869]: I0314 09:41:27.712879 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:41:27 crc kubenswrapper[4869]: E0314 09:41:27.713682 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:41:34 crc kubenswrapper[4869]: I0314 09:41:34.704760 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:41:34 crc kubenswrapper[4869]: E0314 09:41:34.705623 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:41:40 crc kubenswrapper[4869]: I0314 09:41:40.703861 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:41:40 crc kubenswrapper[4869]: E0314 09:41:40.704600 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:41:46 crc kubenswrapper[4869]: I0314 09:41:46.704898 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:41:46 crc kubenswrapper[4869]: E0314 09:41:46.706105 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:41:52 crc kubenswrapper[4869]: I0314 09:41:52.704730 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005"
Mar 14 09:41:52 crc kubenswrapper[4869]: E0314 09:41:52.705763 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:41:57 crc kubenswrapper[4869]: I0314 09:41:57.713550 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08"
Mar 14 09:41:58 crc kubenswrapper[4869]: I0314 09:41:58.341594 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312"}
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.157376 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558022-vgmfn"]
Mar 14 09:42:00 crc kubenswrapper[4869]: E0314 09:42:00.158305 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fda3e65-ac70-412f-aa48-ed55a48c4774" containerName="oc"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.158322 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fda3e65-ac70-412f-aa48-ed55a48c4774" containerName="oc"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.158494 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fda3e65-ac70-412f-aa48-ed55a48c4774" containerName="oc"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.159312 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558022-vgmfn"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.161602 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.161863 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.162502 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.170544 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558022-vgmfn"]
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.258999 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5dt2\" (UniqueName: \"kubernetes.io/projected/72a3866f-8a94-4c92-bf28-c86ae06c677b-kube-api-access-n5dt2\") pod \"auto-csr-approver-29558022-vgmfn\" (UID: \"72a3866f-8a94-4c92-bf28-c86ae06c677b\") " pod="openshift-infra/auto-csr-approver-29558022-vgmfn"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.359941 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5dt2\" (UniqueName: \"kubernetes.io/projected/72a3866f-8a94-4c92-bf28-c86ae06c677b-kube-api-access-n5dt2\") pod \"auto-csr-approver-29558022-vgmfn\" (UID: \"72a3866f-8a94-4c92-bf28-c86ae06c677b\") " pod="openshift-infra/auto-csr-approver-29558022-vgmfn"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.382285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5dt2\" (UniqueName: \"kubernetes.io/projected/72a3866f-8a94-4c92-bf28-c86ae06c677b-kube-api-access-n5dt2\") pod \"auto-csr-approver-29558022-vgmfn\" (UID: \"72a3866f-8a94-4c92-bf28-c86ae06c677b\") " pod="openshift-infra/auto-csr-approver-29558022-vgmfn"
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.478603 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558022-vgmfn"
Mar 14 09:42:00 crc kubenswrapper[4869]: W0314 09:42:00.961161 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a3866f_8a94_4c92_bf28_c86ae06c677b.slice/crio-c78bd071a02f40dff6b7754247cf69521b09dc15d3bfdd3b9a630a89f050dc2b WatchSource:0}: Error finding container c78bd071a02f40dff6b7754247cf69521b09dc15d3bfdd3b9a630a89f050dc2b: Status 404 returned error can't find the container with id c78bd071a02f40dff6b7754247cf69521b09dc15d3bfdd3b9a630a89f050dc2b
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.964171 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 14 09:42:00 crc kubenswrapper[4869]: I0314 09:42:00.967907 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558022-vgmfn"]
Mar 14 09:42:01 crc kubenswrapper[4869]: I0314 09:42:01.370674 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558022-vgmfn" event={"ID":"72a3866f-8a94-4c92-bf28-c86ae06c677b","Type":"ContainerStarted","Data":"c78bd071a02f40dff6b7754247cf69521b09dc15d3bfdd3b9a630a89f050dc2b"}
Mar 14 09:42:02 crc kubenswrapper[4869]: I0314 09:42:02.380815 4869 generic.go:334] "Generic (PLEG): container finished" podID="72a3866f-8a94-4c92-bf28-c86ae06c677b" containerID="73854500fab37c9a4a5c48dae7fc2041d64e44c6e99ffca659f073948a2ba003" exitCode=0
Mar 14 09:42:02 crc kubenswrapper[4869]: I0314 09:42:02.381011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558022-vgmfn"
event={"ID":"72a3866f-8a94-4c92-bf28-c86ae06c677b","Type":"ContainerDied","Data":"73854500fab37c9a4a5c48dae7fc2041d64e44c6e99ffca659f073948a2ba003"} Mar 14 09:42:03 crc kubenswrapper[4869]: I0314 09:42:03.768549 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558022-vgmfn" Mar 14 09:42:03 crc kubenswrapper[4869]: I0314 09:42:03.952204 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5dt2\" (UniqueName: \"kubernetes.io/projected/72a3866f-8a94-4c92-bf28-c86ae06c677b-kube-api-access-n5dt2\") pod \"72a3866f-8a94-4c92-bf28-c86ae06c677b\" (UID: \"72a3866f-8a94-4c92-bf28-c86ae06c677b\") " Mar 14 09:42:03 crc kubenswrapper[4869]: I0314 09:42:03.958057 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a3866f-8a94-4c92-bf28-c86ae06c677b-kube-api-access-n5dt2" (OuterVolumeSpecName: "kube-api-access-n5dt2") pod "72a3866f-8a94-4c92-bf28-c86ae06c677b" (UID: "72a3866f-8a94-4c92-bf28-c86ae06c677b"). InnerVolumeSpecName "kube-api-access-n5dt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:42:04 crc kubenswrapper[4869]: I0314 09:42:04.055155 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5dt2\" (UniqueName: \"kubernetes.io/projected/72a3866f-8a94-4c92-bf28-c86ae06c677b-kube-api-access-n5dt2\") on node \"crc\" DevicePath \"\"" Mar 14 09:42:04 crc kubenswrapper[4869]: I0314 09:42:04.399371 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558022-vgmfn" event={"ID":"72a3866f-8a94-4c92-bf28-c86ae06c677b","Type":"ContainerDied","Data":"c78bd071a02f40dff6b7754247cf69521b09dc15d3bfdd3b9a630a89f050dc2b"} Mar 14 09:42:04 crc kubenswrapper[4869]: I0314 09:42:04.399643 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c78bd071a02f40dff6b7754247cf69521b09dc15d3bfdd3b9a630a89f050dc2b" Mar 14 09:42:04 crc kubenswrapper[4869]: I0314 09:42:04.399451 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558022-vgmfn" Mar 14 09:42:04 crc kubenswrapper[4869]: I0314 09:42:04.538961 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:42:04 crc kubenswrapper[4869]: I0314 09:42:04.539170 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:42:04 crc kubenswrapper[4869]: I0314 09:42:04.842773 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558016-8f2wm"] Mar 14 09:42:04 crc kubenswrapper[4869]: I0314 09:42:04.852424 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558016-8f2wm"] Mar 14 09:42:05 crc kubenswrapper[4869]: I0314 09:42:05.716167 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="716760de-3737-4911-bf17-2e731be929cd" 
path="/var/lib/kubelet/pods/716760de-3737-4911-bf17-2e731be929cd/volumes" Mar 14 09:42:06 crc kubenswrapper[4869]: I0314 09:42:06.323255 4869 scope.go:117] "RemoveContainer" containerID="efae33106513458e6c2e84bb707e138b5298a6bc82b7195a14620c6ee0fe2879" Mar 14 09:42:06 crc kubenswrapper[4869]: I0314 09:42:06.417112 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" exitCode=1 Mar 14 09:42:06 crc kubenswrapper[4869]: I0314 09:42:06.417162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312"} Mar 14 09:42:06 crc kubenswrapper[4869]: I0314 09:42:06.417198 4869 scope.go:117] "RemoveContainer" containerID="68910472a5c9da62bcb5ef603be092dc2ab062c5baed59b74274400e1994ba08" Mar 14 09:42:06 crc kubenswrapper[4869]: I0314 09:42:06.418211 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:42:06 crc kubenswrapper[4869]: E0314 09:42:06.418577 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:42:06 crc kubenswrapper[4869]: I0314 09:42:06.704567 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:42:06 crc kubenswrapper[4869]: E0314 09:42:06.704857 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:42:14 crc kubenswrapper[4869]: I0314 09:42:14.538726 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:42:14 crc kubenswrapper[4869]: I0314 09:42:14.541230 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:42:14 crc kubenswrapper[4869]: I0314 09:42:14.542140 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:42:14 crc kubenswrapper[4869]: E0314 09:42:14.542461 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:42:15 crc kubenswrapper[4869]: I0314 09:42:15.554696 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:42:15 crc kubenswrapper[4869]: E0314 09:42:15.555016 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:42:17 crc kubenswrapper[4869]: I0314 09:42:17.710843 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:42:17 crc kubenswrapper[4869]: E0314 09:42:17.712127 4869 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:42:26 crc kubenswrapper[4869]: I0314 09:42:26.704885 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:42:26 crc kubenswrapper[4869]: E0314 09:42:26.705841 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:42:31 crc kubenswrapper[4869]: I0314 09:42:31.704244 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:42:32 crc kubenswrapper[4869]: I0314 09:42:32.713389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f"} Mar 14 09:42:34 crc kubenswrapper[4869]: I0314 09:42:34.405366 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:42:34 crc kubenswrapper[4869]: I0314 09:42:34.405733 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:42:39 crc kubenswrapper[4869]: I0314 09:42:39.806055 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" 
exitCode=1 Mar 14 09:42:39 crc kubenswrapper[4869]: I0314 09:42:39.806133 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f"} Mar 14 09:42:39 crc kubenswrapper[4869]: I0314 09:42:39.806680 4869 scope.go:117] "RemoveContainer" containerID="20cceb4b83f51a0639abdcda2117462c5b0ac6a930583621808aa4aa5e924005" Mar 14 09:42:39 crc kubenswrapper[4869]: I0314 09:42:39.807453 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:42:39 crc kubenswrapper[4869]: E0314 09:42:39.807700 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:42:40 crc kubenswrapper[4869]: I0314 09:42:40.704086 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:42:40 crc kubenswrapper[4869]: E0314 09:42:40.704687 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:42:44 crc kubenswrapper[4869]: I0314 09:42:44.404403 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:42:44 crc kubenswrapper[4869]: I0314 09:42:44.404769 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:42:44 crc kubenswrapper[4869]: I0314 09:42:44.405463 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:42:44 crc kubenswrapper[4869]: E0314 09:42:44.405731 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:42:54 crc kubenswrapper[4869]: I0314 09:42:54.703499 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:42:54 crc kubenswrapper[4869]: E0314 09:42:54.704288 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:42:58 crc kubenswrapper[4869]: I0314 09:42:58.703429 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:42:58 crc kubenswrapper[4869]: E0314 09:42:58.704223 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:43:09 crc kubenswrapper[4869]: I0314 09:43:09.704477 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 
09:43:09 crc kubenswrapper[4869]: I0314 09:43:09.705245 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:43:09 crc kubenswrapper[4869]: E0314 09:43:09.705488 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:43:09 crc kubenswrapper[4869]: E0314 09:43:09.705631 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:43:23 crc kubenswrapper[4869]: I0314 09:43:23.704369 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:43:23 crc kubenswrapper[4869]: I0314 09:43:23.705760 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:43:23 crc kubenswrapper[4869]: E0314 09:43:23.705800 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:43:23 crc kubenswrapper[4869]: E0314 09:43:23.706141 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:43:35 crc kubenswrapper[4869]: I0314 09:43:35.704453 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:43:35 crc kubenswrapper[4869]: E0314 09:43:35.705471 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:43:38 crc kubenswrapper[4869]: I0314 09:43:38.705085 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:43:38 crc kubenswrapper[4869]: E0314 09:43:38.705845 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:43:39 crc kubenswrapper[4869]: I0314 09:43:39.604870 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:43:39 crc kubenswrapper[4869]: I0314 09:43:39.604982 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:43:49 crc kubenswrapper[4869]: I0314 09:43:49.703899 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:43:49 crc kubenswrapper[4869]: E0314 09:43:49.705851 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:43:52 crc kubenswrapper[4869]: I0314 09:43:52.704454 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:43:52 crc kubenswrapper[4869]: E0314 09:43:52.706123 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.145668 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558024-qzv7l"] Mar 14 09:44:00 crc kubenswrapper[4869]: E0314 09:44:00.146776 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a3866f-8a94-4c92-bf28-c86ae06c677b" containerName="oc" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.146792 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a3866f-8a94-4c92-bf28-c86ae06c677b" containerName="oc" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.147008 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="72a3866f-8a94-4c92-bf28-c86ae06c677b" containerName="oc" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.147916 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558024-qzv7l" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.150551 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.151089 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.151093 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.165019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558024-qzv7l"] Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.244203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76vwn\" (UniqueName: \"kubernetes.io/projected/eb2efd49-a138-45a3-87f7-d811d7fc100a-kube-api-access-76vwn\") pod \"auto-csr-approver-29558024-qzv7l\" (UID: \"eb2efd49-a138-45a3-87f7-d811d7fc100a\") " pod="openshift-infra/auto-csr-approver-29558024-qzv7l" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.345998 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76vwn\" (UniqueName: \"kubernetes.io/projected/eb2efd49-a138-45a3-87f7-d811d7fc100a-kube-api-access-76vwn\") pod \"auto-csr-approver-29558024-qzv7l\" (UID: \"eb2efd49-a138-45a3-87f7-d811d7fc100a\") " pod="openshift-infra/auto-csr-approver-29558024-qzv7l" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.373394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76vwn\" (UniqueName: 
\"kubernetes.io/projected/eb2efd49-a138-45a3-87f7-d811d7fc100a-kube-api-access-76vwn\") pod \"auto-csr-approver-29558024-qzv7l\" (UID: \"eb2efd49-a138-45a3-87f7-d811d7fc100a\") " pod="openshift-infra/auto-csr-approver-29558024-qzv7l" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.470070 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558024-qzv7l" Mar 14 09:44:00 crc kubenswrapper[4869]: I0314 09:44:00.956953 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558024-qzv7l"] Mar 14 09:44:01 crc kubenswrapper[4869]: I0314 09:44:01.590818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558024-qzv7l" event={"ID":"eb2efd49-a138-45a3-87f7-d811d7fc100a","Type":"ContainerStarted","Data":"7f11a4f4d7462a66046323a17295b64031940be9b814cb3d3843d4a738613008"} Mar 14 09:44:02 crc kubenswrapper[4869]: I0314 09:44:02.606797 4869 generic.go:334] "Generic (PLEG): container finished" podID="eb2efd49-a138-45a3-87f7-d811d7fc100a" containerID="9cd30d2c132dbfdcf9273aff04123eceacccf7644f783581c86c301666f41618" exitCode=0 Mar 14 09:44:02 crc kubenswrapper[4869]: I0314 09:44:02.607173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558024-qzv7l" event={"ID":"eb2efd49-a138-45a3-87f7-d811d7fc100a","Type":"ContainerDied","Data":"9cd30d2c132dbfdcf9273aff04123eceacccf7644f783581c86c301666f41618"} Mar 14 09:44:02 crc kubenswrapper[4869]: I0314 09:44:02.706229 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:44:02 crc kubenswrapper[4869]: E0314 09:44:02.706688 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:44:03 crc kubenswrapper[4869]: I0314 09:44:03.704736 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:44:03 crc kubenswrapper[4869]: E0314 09:44:03.705273 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:44:04 crc kubenswrapper[4869]: I0314 09:44:04.015377 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558024-qzv7l" Mar 14 09:44:04 crc kubenswrapper[4869]: I0314 09:44:04.140325 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76vwn\" (UniqueName: \"kubernetes.io/projected/eb2efd49-a138-45a3-87f7-d811d7fc100a-kube-api-access-76vwn\") pod \"eb2efd49-a138-45a3-87f7-d811d7fc100a\" (UID: \"eb2efd49-a138-45a3-87f7-d811d7fc100a\") " Mar 14 09:44:04 crc kubenswrapper[4869]: I0314 09:44:04.147637 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2efd49-a138-45a3-87f7-d811d7fc100a-kube-api-access-76vwn" (OuterVolumeSpecName: "kube-api-access-76vwn") pod "eb2efd49-a138-45a3-87f7-d811d7fc100a" (UID: "eb2efd49-a138-45a3-87f7-d811d7fc100a"). InnerVolumeSpecName "kube-api-access-76vwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:44:04 crc kubenswrapper[4869]: I0314 09:44:04.242396 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76vwn\" (UniqueName: \"kubernetes.io/projected/eb2efd49-a138-45a3-87f7-d811d7fc100a-kube-api-access-76vwn\") on node \"crc\" DevicePath \"\"" Mar 14 09:44:04 crc kubenswrapper[4869]: I0314 09:44:04.628667 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558024-qzv7l" event={"ID":"eb2efd49-a138-45a3-87f7-d811d7fc100a","Type":"ContainerDied","Data":"7f11a4f4d7462a66046323a17295b64031940be9b814cb3d3843d4a738613008"} Mar 14 09:44:04 crc kubenswrapper[4869]: I0314 09:44:04.628925 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f11a4f4d7462a66046323a17295b64031940be9b814cb3d3843d4a738613008" Mar 14 09:44:04 crc kubenswrapper[4869]: I0314 09:44:04.628755 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558024-qzv7l" Mar 14 09:44:05 crc kubenswrapper[4869]: I0314 09:44:05.085633 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558018-q64kr"] Mar 14 09:44:05 crc kubenswrapper[4869]: I0314 09:44:05.094402 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558018-q64kr"] Mar 14 09:44:05 crc kubenswrapper[4869]: I0314 09:44:05.716481 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1362cc-66c4-4c5f-a45e-29eefe172762" path="/var/lib/kubelet/pods/cd1362cc-66c4-4c5f-a45e-29eefe172762/volumes" Mar 14 09:44:06 crc kubenswrapper[4869]: I0314 09:44:06.406436 4869 scope.go:117] "RemoveContainer" containerID="6477206c4a9ba42b95be6a23539f7da18ff48dd67b970f52ad9aeee87883de98" Mar 14 09:44:09 crc kubenswrapper[4869]: I0314 09:44:09.605112 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:44:09 crc kubenswrapper[4869]: I0314 09:44:09.606026 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:44:16 crc kubenswrapper[4869]: I0314 09:44:16.704327 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:44:16 crc kubenswrapper[4869]: E0314 09:44:16.705119 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:44:18 crc kubenswrapper[4869]: I0314 09:44:18.704743 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:44:18 crc kubenswrapper[4869]: E0314 09:44:18.705257 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:44:27 crc kubenswrapper[4869]: I0314 09:44:27.713087 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:44:27 crc 
kubenswrapper[4869]: E0314 09:44:27.714020 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:44:30 crc kubenswrapper[4869]: I0314 09:44:30.704648 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:44:30 crc kubenswrapper[4869]: E0314 09:44:30.705658 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 09:44:39.605635 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 09:44:39.608419 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 09:44:39.608625 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 09:44:39.610048 
4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9cd0c2cf419477519bc0c878cc4d31fa5648d9e139fd0661ec36ae8f1c04dd7"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 09:44:39.610209 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://c9cd0c2cf419477519bc0c878cc4d31fa5648d9e139fd0661ec36ae8f1c04dd7" gracePeriod=600 Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 09:44:39.704147 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:44:39 crc kubenswrapper[4869]: E0314 09:44:39.704560 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 09:44:39.948212 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="c9cd0c2cf419477519bc0c878cc4d31fa5648d9e139fd0661ec36ae8f1c04dd7" exitCode=0 Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 09:44:39.948275 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"c9cd0c2cf419477519bc0c878cc4d31fa5648d9e139fd0661ec36ae8f1c04dd7"} Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 
09:44:39.948623 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e"} Mar 14 09:44:39 crc kubenswrapper[4869]: I0314 09:44:39.948644 4869 scope.go:117] "RemoveContainer" containerID="a5f408e40c7ae313fda2c6fc85ff09a300bda6b06c9e5508734ae3b2107cf6bd" Mar 14 09:44:41 crc kubenswrapper[4869]: I0314 09:44:41.704219 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:44:41 crc kubenswrapper[4869]: E0314 09:44:41.705204 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.668525 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xfrt2"] Mar 14 09:44:47 crc kubenswrapper[4869]: E0314 09:44:47.670692 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb2efd49-a138-45a3-87f7-d811d7fc100a" containerName="oc" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.670777 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb2efd49-a138-45a3-87f7-d811d7fc100a" containerName="oc" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.671035 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb2efd49-a138-45a3-87f7-d811d7fc100a" containerName="oc" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.672435 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.681849 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfrt2"] Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.796385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khsrm\" (UniqueName: \"kubernetes.io/projected/242aa090-55ba-434d-9557-c10267e198fb-kube-api-access-khsrm\") pod \"community-operators-xfrt2\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.796813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-utilities\") pod \"community-operators-xfrt2\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.797113 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-catalog-content\") pod \"community-operators-xfrt2\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.898800 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khsrm\" (UniqueName: \"kubernetes.io/projected/242aa090-55ba-434d-9557-c10267e198fb-kube-api-access-khsrm\") pod \"community-operators-xfrt2\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.898897 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-utilities\") pod \"community-operators-xfrt2\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.898946 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-catalog-content\") pod \"community-operators-xfrt2\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.899339 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-utilities\") pod \"community-operators-xfrt2\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.899356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-catalog-content\") pod \"community-operators-xfrt2\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:47 crc kubenswrapper[4869]: I0314 09:44:47.924231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khsrm\" (UniqueName: \"kubernetes.io/projected/242aa090-55ba-434d-9557-c10267e198fb-kube-api-access-khsrm\") pod \"community-operators-xfrt2\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:48 crc kubenswrapper[4869]: I0314 09:44:48.001528 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:48 crc kubenswrapper[4869]: I0314 09:44:48.548019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfrt2"] Mar 14 09:44:49 crc kubenswrapper[4869]: I0314 09:44:49.081234 4869 generic.go:334] "Generic (PLEG): container finished" podID="242aa090-55ba-434d-9557-c10267e198fb" containerID="01b9e93d47ecf80005148328e11a330499ac633128f88668803d3a8d78af0979" exitCode=0 Mar 14 09:44:49 crc kubenswrapper[4869]: I0314 09:44:49.081288 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfrt2" event={"ID":"242aa090-55ba-434d-9557-c10267e198fb","Type":"ContainerDied","Data":"01b9e93d47ecf80005148328e11a330499ac633128f88668803d3a8d78af0979"} Mar 14 09:44:49 crc kubenswrapper[4869]: I0314 09:44:49.081598 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfrt2" event={"ID":"242aa090-55ba-434d-9557-c10267e198fb","Type":"ContainerStarted","Data":"a18e2e0260553897a027169913300f519df1547bd7457ec1ec44a447392c1fdf"} Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.107005 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfrt2" event={"ID":"242aa090-55ba-434d-9557-c10267e198fb","Type":"ContainerStarted","Data":"1d0b34a9ccbf05b5293007f54ea9b35a6c59ce9864ff6323a337943cf8c3b6f5"} Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.621200 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-khxb8"] Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.624213 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.639850 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khxb8"] Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.758461 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-utilities\") pod \"redhat-operators-khxb8\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") " pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.758547 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8gq\" (UniqueName: \"kubernetes.io/projected/a30a908d-491a-4069-b609-a2929b616dc2-kube-api-access-ps8gq\") pod \"redhat-operators-khxb8\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") " pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.758574 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-catalog-content\") pod \"redhat-operators-khxb8\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") " pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.860702 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-utilities\") pod \"redhat-operators-khxb8\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") " pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.860781 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-catalog-content\") pod \"redhat-operators-khxb8\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") " pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.860801 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps8gq\" (UniqueName: \"kubernetes.io/projected/a30a908d-491a-4069-b609-a2929b616dc2-kube-api-access-ps8gq\") pod \"redhat-operators-khxb8\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") " pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.861371 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-utilities\") pod \"redhat-operators-khxb8\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") " pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.861415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-catalog-content\") pod \"redhat-operators-khxb8\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") " pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:50 crc kubenswrapper[4869]: I0314 09:44:50.881646 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps8gq\" (UniqueName: \"kubernetes.io/projected/a30a908d-491a-4069-b609-a2929b616dc2-kube-api-access-ps8gq\") pod \"redhat-operators-khxb8\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") " pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:51 crc kubenswrapper[4869]: I0314 09:44:51.000233 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:44:51 crc kubenswrapper[4869]: I0314 09:44:51.132092 4869 generic.go:334] "Generic (PLEG): container finished" podID="242aa090-55ba-434d-9557-c10267e198fb" containerID="1d0b34a9ccbf05b5293007f54ea9b35a6c59ce9864ff6323a337943cf8c3b6f5" exitCode=0 Mar 14 09:44:51 crc kubenswrapper[4869]: I0314 09:44:51.132449 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfrt2" event={"ID":"242aa090-55ba-434d-9557-c10267e198fb","Type":"ContainerDied","Data":"1d0b34a9ccbf05b5293007f54ea9b35a6c59ce9864ff6323a337943cf8c3b6f5"} Mar 14 09:44:51 crc kubenswrapper[4869]: I0314 09:44:51.500362 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khxb8"] Mar 14 09:44:52 crc kubenswrapper[4869]: I0314 09:44:52.145896 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfrt2" event={"ID":"242aa090-55ba-434d-9557-c10267e198fb","Type":"ContainerStarted","Data":"0fd7f6f4a3312b331d4110a29d96ebaeecb92b2c04d3a1f9c7782e6f526baf61"} Mar 14 09:44:52 crc kubenswrapper[4869]: I0314 09:44:52.147828 4869 generic.go:334] "Generic (PLEG): container finished" podID="a30a908d-491a-4069-b609-a2929b616dc2" containerID="536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c" exitCode=0 Mar 14 09:44:52 crc kubenswrapper[4869]: I0314 09:44:52.147984 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khxb8" event={"ID":"a30a908d-491a-4069-b609-a2929b616dc2","Type":"ContainerDied","Data":"536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c"} Mar 14 09:44:52 crc kubenswrapper[4869]: I0314 09:44:52.148113 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khxb8" 
event={"ID":"a30a908d-491a-4069-b609-a2929b616dc2","Type":"ContainerStarted","Data":"8d9fc98431aece3d315bf305269c7236f53c3cc79eddadc13ae5380f91e5352c"} Mar 14 09:44:52 crc kubenswrapper[4869]: I0314 09:44:52.166762 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xfrt2" podStartSLOduration=2.649725709 podStartE2EDuration="5.166741726s" podCreationTimestamp="2026-03-14 09:44:47 +0000 UTC" firstStartedPulling="2026-03-14 09:44:49.087226015 +0000 UTC m=+2842.059508068" lastFinishedPulling="2026-03-14 09:44:51.604242032 +0000 UTC m=+2844.576524085" observedRunningTime="2026-03-14 09:44:52.165364912 +0000 UTC m=+2845.137646975" watchObservedRunningTime="2026-03-14 09:44:52.166741726 +0000 UTC m=+2845.139023779" Mar 14 09:44:52 crc kubenswrapper[4869]: I0314 09:44:52.703887 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:44:52 crc kubenswrapper[4869]: E0314 09:44:52.704703 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:44:54 crc kubenswrapper[4869]: I0314 09:44:54.174761 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khxb8" event={"ID":"a30a908d-491a-4069-b609-a2929b616dc2","Type":"ContainerStarted","Data":"fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690"} Mar 14 09:44:55 crc kubenswrapper[4869]: I0314 09:44:55.833494 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5s8v5"] Mar 14 09:44:55 crc kubenswrapper[4869]: I0314 09:44:55.837954 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:55 crc kubenswrapper[4869]: I0314 09:44:55.855267 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5s8v5"] Mar 14 09:44:55 crc kubenswrapper[4869]: I0314 09:44:55.974777 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-catalog-content\") pod \"certified-operators-5s8v5\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:55 crc kubenswrapper[4869]: I0314 09:44:55.974827 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-utilities\") pod \"certified-operators-5s8v5\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:55 crc kubenswrapper[4869]: I0314 09:44:55.974871 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hnl\" (UniqueName: \"kubernetes.io/projected/57648155-302e-452f-b595-49a6146de92f-kube-api-access-m4hnl\") pod \"certified-operators-5s8v5\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.077024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-catalog-content\") pod \"certified-operators-5s8v5\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.077078 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-utilities\") pod \"certified-operators-5s8v5\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.077121 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hnl\" (UniqueName: \"kubernetes.io/projected/57648155-302e-452f-b595-49a6146de92f-kube-api-access-m4hnl\") pod \"certified-operators-5s8v5\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.077959 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-catalog-content\") pod \"certified-operators-5s8v5\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.077974 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-utilities\") pod \"certified-operators-5s8v5\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.097534 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hnl\" (UniqueName: \"kubernetes.io/projected/57648155-302e-452f-b595-49a6146de92f-kube-api-access-m4hnl\") pod \"certified-operators-5s8v5\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.165895 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.226773 4869 generic.go:334] "Generic (PLEG): container finished" podID="a30a908d-491a-4069-b609-a2929b616dc2" containerID="fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690" exitCode=0 Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.226830 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khxb8" event={"ID":"a30a908d-491a-4069-b609-a2929b616dc2","Type":"ContainerDied","Data":"fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690"} Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.704970 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:44:56 crc kubenswrapper[4869]: E0314 09:44:56.705797 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:44:56 crc kubenswrapper[4869]: I0314 09:44:56.853182 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5s8v5"] Mar 14 09:44:57 crc kubenswrapper[4869]: I0314 09:44:57.235200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s8v5" event={"ID":"57648155-302e-452f-b595-49a6146de92f","Type":"ContainerStarted","Data":"79a9445eed31b3e89fa8c49aec4b0e73cbb7708f43b7d226210d2a8431859aea"} Mar 14 09:44:57 crc kubenswrapper[4869]: E0314 09:44:57.447638 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57648155_302e_452f_b595_49a6146de92f.slice/crio-conmon-1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50.scope\": RecentStats: unable to find data in memory cache]" Mar 14 09:44:58 crc kubenswrapper[4869]: I0314 09:44:58.002051 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:58 crc kubenswrapper[4869]: I0314 09:44:58.002496 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:58 crc kubenswrapper[4869]: I0314 09:44:58.055742 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:58 crc kubenswrapper[4869]: I0314 09:44:58.247283 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khxb8" event={"ID":"a30a908d-491a-4069-b609-a2929b616dc2","Type":"ContainerStarted","Data":"c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9"} Mar 14 09:44:58 crc kubenswrapper[4869]: I0314 09:44:58.249907 4869 generic.go:334] "Generic (PLEG): container finished" podID="57648155-302e-452f-b595-49a6146de92f" containerID="1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50" exitCode=0 Mar 14 09:44:58 crc kubenswrapper[4869]: I0314 09:44:58.249964 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s8v5" event={"ID":"57648155-302e-452f-b595-49a6146de92f","Type":"ContainerDied","Data":"1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50"} Mar 14 09:44:58 crc kubenswrapper[4869]: I0314 09:44:58.273709 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-khxb8" podStartSLOduration=2.7450417 podStartE2EDuration="8.27368843s" podCreationTimestamp="2026-03-14 
09:44:50 +0000 UTC" firstStartedPulling="2026-03-14 09:44:52.149110724 +0000 UTC m=+2845.121392777" lastFinishedPulling="2026-03-14 09:44:57.677757454 +0000 UTC m=+2850.650039507" observedRunningTime="2026-03-14 09:44:58.267964139 +0000 UTC m=+2851.240246202" watchObservedRunningTime="2026-03-14 09:44:58.27368843 +0000 UTC m=+2851.245970483" Mar 14 09:44:58 crc kubenswrapper[4869]: I0314 09:44:58.314448 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:44:59 crc kubenswrapper[4869]: I0314 09:44:59.261589 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s8v5" event={"ID":"57648155-302e-452f-b595-49a6146de92f","Type":"ContainerStarted","Data":"bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2"} Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.172386 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7"] Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.176041 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.179271 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.179930 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.185811 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7"] Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.270667 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dac653a3-515a-4633-bef8-e32694085b95-secret-volume\") pod \"collect-profiles-29558025-p99s7\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.270860 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dac653a3-515a-4633-bef8-e32694085b95-config-volume\") pod \"collect-profiles-29558025-p99s7\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.270892 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm59f\" (UniqueName: \"kubernetes.io/projected/dac653a3-515a-4633-bef8-e32694085b95-kube-api-access-zm59f\") pod \"collect-profiles-29558025-p99s7\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.273736 4869 generic.go:334] "Generic (PLEG): container finished" podID="57648155-302e-452f-b595-49a6146de92f" containerID="bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2" exitCode=0 Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.273785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s8v5" event={"ID":"57648155-302e-452f-b595-49a6146de92f","Type":"ContainerDied","Data":"bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2"} Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.372498 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dac653a3-515a-4633-bef8-e32694085b95-config-volume\") pod \"collect-profiles-29558025-p99s7\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.372600 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm59f\" (UniqueName: \"kubernetes.io/projected/dac653a3-515a-4633-bef8-e32694085b95-kube-api-access-zm59f\") pod \"collect-profiles-29558025-p99s7\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.372639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dac653a3-515a-4633-bef8-e32694085b95-secret-volume\") pod \"collect-profiles-29558025-p99s7\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.373924 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dac653a3-515a-4633-bef8-e32694085b95-config-volume\") pod \"collect-profiles-29558025-p99s7\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.381168 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dac653a3-515a-4633-bef8-e32694085b95-secret-volume\") pod \"collect-profiles-29558025-p99s7\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.396453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm59f\" (UniqueName: \"kubernetes.io/projected/dac653a3-515a-4633-bef8-e32694085b95-kube-api-access-zm59f\") pod \"collect-profiles-29558025-p99s7\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.505148 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:00 crc kubenswrapper[4869]: I0314 09:45:00.890240 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7"] Mar 14 09:45:00 crc kubenswrapper[4869]: W0314 09:45:00.894816 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac653a3_515a_4633_bef8_e32694085b95.slice/crio-32ce7e6bef650d0405caf297132173c2442ed848210c41846e211a51e3f1ae82 WatchSource:0}: Error finding container 32ce7e6bef650d0405caf297132173c2442ed848210c41846e211a51e3f1ae82: Status 404 returned error can't find the container with id 32ce7e6bef650d0405caf297132173c2442ed848210c41846e211a51e3f1ae82 Mar 14 09:45:01 crc kubenswrapper[4869]: I0314 09:45:01.000452 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:45:01 crc kubenswrapper[4869]: I0314 09:45:01.000503 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-khxb8" Mar 14 09:45:01 crc kubenswrapper[4869]: I0314 09:45:01.305280 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" event={"ID":"dac653a3-515a-4633-bef8-e32694085b95","Type":"ContainerStarted","Data":"9457afa2feec048d628d2656df394bdc091320d22b7d08d442997ce803c57044"} Mar 14 09:45:01 crc kubenswrapper[4869]: I0314 09:45:01.305753 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" event={"ID":"dac653a3-515a-4633-bef8-e32694085b95","Type":"ContainerStarted","Data":"32ce7e6bef650d0405caf297132173c2442ed848210c41846e211a51e3f1ae82"} Mar 14 09:45:01 crc kubenswrapper[4869]: I0314 09:45:01.315091 4869 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-5s8v5" event={"ID":"57648155-302e-452f-b595-49a6146de92f","Type":"ContainerStarted","Data":"0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c"} Mar 14 09:45:01 crc kubenswrapper[4869]: I0314 09:45:01.342737 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" podStartSLOduration=1.342707583 podStartE2EDuration="1.342707583s" podCreationTimestamp="2026-03-14 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 09:45:01.331439226 +0000 UTC m=+2854.303721289" watchObservedRunningTime="2026-03-14 09:45:01.342707583 +0000 UTC m=+2854.314989636" Mar 14 09:45:01 crc kubenswrapper[4869]: I0314 09:45:01.406751 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5s8v5" podStartSLOduration=3.863123706 podStartE2EDuration="6.406726526s" podCreationTimestamp="2026-03-14 09:44:55 +0000 UTC" firstStartedPulling="2026-03-14 09:44:58.263587462 +0000 UTC m=+2851.235869535" lastFinishedPulling="2026-03-14 09:45:00.807190302 +0000 UTC m=+2853.779472355" observedRunningTime="2026-03-14 09:45:01.401853906 +0000 UTC m=+2854.374135959" watchObservedRunningTime="2026-03-14 09:45:01.406726526 +0000 UTC m=+2854.379008589" Mar 14 09:45:02 crc kubenswrapper[4869]: I0314 09:45:02.064836 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-khxb8" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="registry-server" probeResult="failure" output=< Mar 14 09:45:02 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 09:45:02 crc kubenswrapper[4869]: > Mar 14 09:45:02 crc kubenswrapper[4869]: I0314 09:45:02.330281 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="dac653a3-515a-4633-bef8-e32694085b95" containerID="9457afa2feec048d628d2656df394bdc091320d22b7d08d442997ce803c57044" exitCode=0 Mar 14 09:45:02 crc kubenswrapper[4869]: I0314 09:45:02.330371 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" event={"ID":"dac653a3-515a-4633-bef8-e32694085b95","Type":"ContainerDied","Data":"9457afa2feec048d628d2656df394bdc091320d22b7d08d442997ce803c57044"} Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.211937 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfrt2"] Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.212526 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xfrt2" podUID="242aa090-55ba-434d-9557-c10267e198fb" containerName="registry-server" containerID="cri-o://0fd7f6f4a3312b331d4110a29d96ebaeecb92b2c04d3a1f9c7782e6f526baf61" gracePeriod=2 Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.341371 4869 generic.go:334] "Generic (PLEG): container finished" podID="242aa090-55ba-434d-9557-c10267e198fb" containerID="0fd7f6f4a3312b331d4110a29d96ebaeecb92b2c04d3a1f9c7782e6f526baf61" exitCode=0 Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.341421 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfrt2" event={"ID":"242aa090-55ba-434d-9557-c10267e198fb","Type":"ContainerDied","Data":"0fd7f6f4a3312b331d4110a29d96ebaeecb92b2c04d3a1f9c7782e6f526baf61"} Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.723672 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.730029 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.849857 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-catalog-content\") pod \"242aa090-55ba-434d-9557-c10267e198fb\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.850288 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khsrm\" (UniqueName: \"kubernetes.io/projected/242aa090-55ba-434d-9557-c10267e198fb-kube-api-access-khsrm\") pod \"242aa090-55ba-434d-9557-c10267e198fb\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.850323 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-utilities\") pod \"242aa090-55ba-434d-9557-c10267e198fb\" (UID: \"242aa090-55ba-434d-9557-c10267e198fb\") " Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.850419 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dac653a3-515a-4633-bef8-e32694085b95-secret-volume\") pod \"dac653a3-515a-4633-bef8-e32694085b95\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.850558 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm59f\" (UniqueName: \"kubernetes.io/projected/dac653a3-515a-4633-bef8-e32694085b95-kube-api-access-zm59f\") pod \"dac653a3-515a-4633-bef8-e32694085b95\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.850632 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dac653a3-515a-4633-bef8-e32694085b95-config-volume\") pod \"dac653a3-515a-4633-bef8-e32694085b95\" (UID: \"dac653a3-515a-4633-bef8-e32694085b95\") " Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.852893 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-utilities" (OuterVolumeSpecName: "utilities") pod "242aa090-55ba-434d-9557-c10267e198fb" (UID: "242aa090-55ba-434d-9557-c10267e198fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.853392 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac653a3-515a-4633-bef8-e32694085b95-config-volume" (OuterVolumeSpecName: "config-volume") pod "dac653a3-515a-4633-bef8-e32694085b95" (UID: "dac653a3-515a-4633-bef8-e32694085b95"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.857845 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dac653a3-515a-4633-bef8-e32694085b95-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dac653a3-515a-4633-bef8-e32694085b95" (UID: "dac653a3-515a-4633-bef8-e32694085b95"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.858022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac653a3-515a-4633-bef8-e32694085b95-kube-api-access-zm59f" (OuterVolumeSpecName: "kube-api-access-zm59f") pod "dac653a3-515a-4633-bef8-e32694085b95" (UID: "dac653a3-515a-4633-bef8-e32694085b95"). InnerVolumeSpecName "kube-api-access-zm59f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.858683 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242aa090-55ba-434d-9557-c10267e198fb-kube-api-access-khsrm" (OuterVolumeSpecName: "kube-api-access-khsrm") pod "242aa090-55ba-434d-9557-c10267e198fb" (UID: "242aa090-55ba-434d-9557-c10267e198fb"). InnerVolumeSpecName "kube-api-access-khsrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.900480 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "242aa090-55ba-434d-9557-c10267e198fb" (UID: "242aa090-55ba-434d-9557-c10267e198fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.952849 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm59f\" (UniqueName: \"kubernetes.io/projected/dac653a3-515a-4633-bef8-e32694085b95-kube-api-access-zm59f\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.952881 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dac653a3-515a-4633-bef8-e32694085b95-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.952890 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.952898 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khsrm\" (UniqueName: 
\"kubernetes.io/projected/242aa090-55ba-434d-9557-c10267e198fb-kube-api-access-khsrm\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.952907 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242aa090-55ba-434d-9557-c10267e198fb-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:03 crc kubenswrapper[4869]: I0314 09:45:03.952915 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dac653a3-515a-4633-bef8-e32694085b95-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.354269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfrt2" event={"ID":"242aa090-55ba-434d-9557-c10267e198fb","Type":"ContainerDied","Data":"a18e2e0260553897a027169913300f519df1547bd7457ec1ec44a447392c1fdf"} Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.354328 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xfrt2" Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.354344 4869 scope.go:117] "RemoveContainer" containerID="0fd7f6f4a3312b331d4110a29d96ebaeecb92b2c04d3a1f9c7782e6f526baf61" Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.362046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" event={"ID":"dac653a3-515a-4633-bef8-e32694085b95","Type":"ContainerDied","Data":"32ce7e6bef650d0405caf297132173c2442ed848210c41846e211a51e3f1ae82"} Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.362093 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ce7e6bef650d0405caf297132173c2442ed848210c41846e211a51e3f1ae82" Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.362143 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558025-p99s7" Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.401235 4869 scope.go:117] "RemoveContainer" containerID="1d0b34a9ccbf05b5293007f54ea9b35a6c59ce9864ff6323a337943cf8c3b6f5" Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.421557 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xfrt2"] Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.427558 4869 scope.go:117] "RemoveContainer" containerID="01b9e93d47ecf80005148328e11a330499ac633128f88668803d3a8d78af0979" Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.429609 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xfrt2"] Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.438680 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m"] Mar 14 09:45:04 crc kubenswrapper[4869]: I0314 09:45:04.446400 
4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557980-h585m"] Mar 14 09:45:05 crc kubenswrapper[4869]: I0314 09:45:05.717036 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="242aa090-55ba-434d-9557-c10267e198fb" path="/var/lib/kubelet/pods/242aa090-55ba-434d-9557-c10267e198fb/volumes" Mar 14 09:45:05 crc kubenswrapper[4869]: I0314 09:45:05.720934 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e245ff0-c737-4c36-aaad-f79c24030113" path="/var/lib/kubelet/pods/7e245ff0-c737-4c36-aaad-f79c24030113/volumes" Mar 14 09:45:06 crc kubenswrapper[4869]: I0314 09:45:06.166961 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:45:06 crc kubenswrapper[4869]: I0314 09:45:06.167349 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:45:06 crc kubenswrapper[4869]: I0314 09:45:06.228580 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:45:06 crc kubenswrapper[4869]: I0314 09:45:06.438600 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:45:06 crc kubenswrapper[4869]: I0314 09:45:06.505445 4869 scope.go:117] "RemoveContainer" containerID="932577e865feba107d8a6f5f38eb8b43a074fe7e15cc5c0ff3190af7e9f2ce9c" Mar 14 09:45:06 crc kubenswrapper[4869]: I0314 09:45:06.704436 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:45:06 crc kubenswrapper[4869]: E0314 09:45:06.705000 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:45:06 crc kubenswrapper[4869]: I0314 09:45:06.811798 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5s8v5"] Mar 14 09:45:08 crc kubenswrapper[4869]: I0314 09:45:08.400240 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5s8v5" podUID="57648155-302e-452f-b595-49a6146de92f" containerName="registry-server" containerID="cri-o://0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c" gracePeriod=2 Mar 14 09:45:08 crc kubenswrapper[4869]: I0314 09:45:08.705094 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:45:08 crc kubenswrapper[4869]: E0314 09:45:08.705427 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:45:08 crc kubenswrapper[4869]: I0314 09:45:08.917028 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:45:08 crc kubenswrapper[4869]: I0314 09:45:08.958398 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4hnl\" (UniqueName: \"kubernetes.io/projected/57648155-302e-452f-b595-49a6146de92f-kube-api-access-m4hnl\") pod \"57648155-302e-452f-b595-49a6146de92f\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " Mar 14 09:45:08 crc kubenswrapper[4869]: I0314 09:45:08.958634 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-catalog-content\") pod \"57648155-302e-452f-b595-49a6146de92f\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " Mar 14 09:45:08 crc kubenswrapper[4869]: I0314 09:45:08.958702 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-utilities\") pod \"57648155-302e-452f-b595-49a6146de92f\" (UID: \"57648155-302e-452f-b595-49a6146de92f\") " Mar 14 09:45:08 crc kubenswrapper[4869]: I0314 09:45:08.959867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-utilities" (OuterVolumeSpecName: "utilities") pod "57648155-302e-452f-b595-49a6146de92f" (UID: "57648155-302e-452f-b595-49a6146de92f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:45:08 crc kubenswrapper[4869]: I0314 09:45:08.964712 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57648155-302e-452f-b595-49a6146de92f-kube-api-access-m4hnl" (OuterVolumeSpecName: "kube-api-access-m4hnl") pod "57648155-302e-452f-b595-49a6146de92f" (UID: "57648155-302e-452f-b595-49a6146de92f"). InnerVolumeSpecName "kube-api-access-m4hnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.014591 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57648155-302e-452f-b595-49a6146de92f" (UID: "57648155-302e-452f-b595-49a6146de92f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.062631 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.062708 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4hnl\" (UniqueName: \"kubernetes.io/projected/57648155-302e-452f-b595-49a6146de92f-kube-api-access-m4hnl\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.062725 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57648155-302e-452f-b595-49a6146de92f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.415115 4869 generic.go:334] "Generic (PLEG): container finished" podID="57648155-302e-452f-b595-49a6146de92f" containerID="0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c" exitCode=0 Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.415171 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s8v5" event={"ID":"57648155-302e-452f-b595-49a6146de92f","Type":"ContainerDied","Data":"0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c"} Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.415190 4869 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5s8v5" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.415205 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s8v5" event={"ID":"57648155-302e-452f-b595-49a6146de92f","Type":"ContainerDied","Data":"79a9445eed31b3e89fa8c49aec4b0e73cbb7708f43b7d226210d2a8431859aea"} Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.415229 4869 scope.go:117] "RemoveContainer" containerID="0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.453396 4869 scope.go:117] "RemoveContainer" containerID="bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.454926 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5s8v5"] Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.463088 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5s8v5"] Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.481642 4869 scope.go:117] "RemoveContainer" containerID="1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.523329 4869 scope.go:117] "RemoveContainer" containerID="0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c" Mar 14 09:45:09 crc kubenswrapper[4869]: E0314 09:45:09.523916 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c\": container with ID starting with 0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c not found: ID does not exist" containerID="0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.523973 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c"} err="failed to get container status \"0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c\": rpc error: code = NotFound desc = could not find container \"0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c\": container with ID starting with 0400899626cc7ff49867e0271c9c7db47d67ec19ab3c485da742aebe7f3d591c not found: ID does not exist" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.523998 4869 scope.go:117] "RemoveContainer" containerID="bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2" Mar 14 09:45:09 crc kubenswrapper[4869]: E0314 09:45:09.524301 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2\": container with ID starting with bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2 not found: ID does not exist" containerID="bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.524349 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2"} err="failed to get container status \"bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2\": rpc error: code = NotFound desc = could not find container \"bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2\": container with ID starting with bf27e07ac5a6c68e616be5c82516c8d57099592297dd677cd18ff56be3694ff2 not found: ID does not exist" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.524366 4869 scope.go:117] "RemoveContainer" containerID="1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50" Mar 14 09:45:09 crc kubenswrapper[4869]: E0314 
09:45:09.524842 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50\": container with ID starting with 1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50 not found: ID does not exist" containerID="1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.524896 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50"} err="failed to get container status \"1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50\": rpc error: code = NotFound desc = could not find container \"1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50\": container with ID starting with 1dae9792cc297cad0e3c78497e1d52724ad79bccaa3b0622a9b735fa98752c50 not found: ID does not exist" Mar 14 09:45:09 crc kubenswrapper[4869]: I0314 09:45:09.755062 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57648155-302e-452f-b595-49a6146de92f" path="/var/lib/kubelet/pods/57648155-302e-452f-b595-49a6146de92f/volumes" Mar 14 09:45:12 crc kubenswrapper[4869]: I0314 09:45:12.048692 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-khxb8" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="registry-server" probeResult="failure" output=< Mar 14 09:45:12 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 09:45:12 crc kubenswrapper[4869]: > Mar 14 09:45:19 crc kubenswrapper[4869]: I0314 09:45:19.704606 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:45:19 crc kubenswrapper[4869]: E0314 09:45:19.705385 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:45:21 crc kubenswrapper[4869]: I0314 09:45:21.044098 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-khxb8"
Mar 14 09:45:21 crc kubenswrapper[4869]: I0314 09:45:21.103437 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-khxb8"
Mar 14 09:45:21 crc kubenswrapper[4869]: I0314 09:45:21.832254 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khxb8"]
Mar 14 09:45:22 crc kubenswrapper[4869]: I0314 09:45:22.527717 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-khxb8" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="registry-server" containerID="cri-o://c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9" gracePeriod=2
Mar 14 09:45:22 crc kubenswrapper[4869]: I0314 09:45:22.703958 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f"
Mar 14 09:45:22 crc kubenswrapper[4869]: E0314 09:45:22.704187 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.029965 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khxb8"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.198779 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-catalog-content\") pod \"a30a908d-491a-4069-b609-a2929b616dc2\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") "
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.198842 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-utilities\") pod \"a30a908d-491a-4069-b609-a2929b616dc2\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") "
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.198918 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps8gq\" (UniqueName: \"kubernetes.io/projected/a30a908d-491a-4069-b609-a2929b616dc2-kube-api-access-ps8gq\") pod \"a30a908d-491a-4069-b609-a2929b616dc2\" (UID: \"a30a908d-491a-4069-b609-a2929b616dc2\") "
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.200237 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-utilities" (OuterVolumeSpecName: "utilities") pod "a30a908d-491a-4069-b609-a2929b616dc2" (UID: "a30a908d-491a-4069-b609-a2929b616dc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.204277 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30a908d-491a-4069-b609-a2929b616dc2-kube-api-access-ps8gq" (OuterVolumeSpecName: "kube-api-access-ps8gq") pod "a30a908d-491a-4069-b609-a2929b616dc2" (UID: "a30a908d-491a-4069-b609-a2929b616dc2"). InnerVolumeSpecName "kube-api-access-ps8gq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.301656 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.301698 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps8gq\" (UniqueName: \"kubernetes.io/projected/a30a908d-491a-4069-b609-a2929b616dc2-kube-api-access-ps8gq\") on node \"crc\" DevicePath \"\""
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.339260 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a30a908d-491a-4069-b609-a2929b616dc2" (UID: "a30a908d-491a-4069-b609-a2929b616dc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.403377 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a30a908d-491a-4069-b609-a2929b616dc2-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.536408 4869 generic.go:334] "Generic (PLEG): container finished" podID="a30a908d-491a-4069-b609-a2929b616dc2" containerID="c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9" exitCode=0
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.536482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khxb8" event={"ID":"a30a908d-491a-4069-b609-a2929b616dc2","Type":"ContainerDied","Data":"c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9"}
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.536491 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khxb8"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.536548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khxb8" event={"ID":"a30a908d-491a-4069-b609-a2929b616dc2","Type":"ContainerDied","Data":"8d9fc98431aece3d315bf305269c7236f53c3cc79eddadc13ae5380f91e5352c"}
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.536566 4869 scope.go:117] "RemoveContainer" containerID="c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.559043 4869 scope.go:117] "RemoveContainer" containerID="fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.575432 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khxb8"]
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.587389 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-khxb8"]
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.588471 4869 scope.go:117] "RemoveContainer" containerID="536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.630086 4869 scope.go:117] "RemoveContainer" containerID="c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9"
Mar 14 09:45:23 crc kubenswrapper[4869]: E0314 09:45:23.630415 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9\": container with ID starting with c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9 not found: ID does not exist" containerID="c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.630446 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9"} err="failed to get container status \"c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9\": rpc error: code = NotFound desc = could not find container \"c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9\": container with ID starting with c9d42e304bf524fe88a774bcef53a02aeedbf0c74c94e817fc7ecfa55d364ea9 not found: ID does not exist"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.630475 4869 scope.go:117] "RemoveContainer" containerID="fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690"
Mar 14 09:45:23 crc kubenswrapper[4869]: E0314 09:45:23.630705 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690\": container with ID starting with fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690 not found: ID does not exist" containerID="fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.630730 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690"} err="failed to get container status \"fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690\": rpc error: code = NotFound desc = could not find container \"fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690\": container with ID starting with fb7ee7c7c834eada20db28fad729fa55b895c18a4d6ac303ce5d317b415e6690 not found: ID does not exist"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.630748 4869 scope.go:117] "RemoveContainer" containerID="536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c"
Mar 14 09:45:23 crc kubenswrapper[4869]: E0314 09:45:23.631017 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c\": container with ID starting with 536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c not found: ID does not exist" containerID="536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.631038 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c"} err="failed to get container status \"536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c\": rpc error: code = NotFound desc = could not find container \"536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c\": container with ID starting with 536c90c37053d981478019838b58b82255b8ce364434f157e38de73aa159ce2c not found: ID does not exist"
Mar 14 09:45:23 crc kubenswrapper[4869]: I0314 09:45:23.714849 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a30a908d-491a-4069-b609-a2929b616dc2" path="/var/lib/kubelet/pods/a30a908d-491a-4069-b609-a2929b616dc2/volumes"
Mar 14 09:45:33 crc kubenswrapper[4869]: I0314 09:45:33.705608 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312"
Mar 14 09:45:33 crc kubenswrapper[4869]: E0314 09:45:33.706566 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.602538 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ftwvz"]
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603302 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57648155-302e-452f-b595-49a6146de92f" containerName="registry-server"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603316 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="57648155-302e-452f-b595-49a6146de92f" containerName="registry-server"
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603337 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="extract-utilities"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603345 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="extract-utilities"
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603352 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57648155-302e-452f-b595-49a6146de92f" containerName="extract-content"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603359 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="57648155-302e-452f-b595-49a6146de92f" containerName="extract-content"
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603368 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="extract-content"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603375 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="extract-content"
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603397 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac653a3-515a-4633-bef8-e32694085b95" containerName="collect-profiles"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603403 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac653a3-515a-4633-bef8-e32694085b95" containerName="collect-profiles"
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603412 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242aa090-55ba-434d-9557-c10267e198fb" containerName="extract-utilities"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603418 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="242aa090-55ba-434d-9557-c10267e198fb" containerName="extract-utilities"
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603429 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242aa090-55ba-434d-9557-c10267e198fb" containerName="registry-server"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603435 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="242aa090-55ba-434d-9557-c10267e198fb" containerName="registry-server"
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603448 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="registry-server"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603454 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="registry-server"
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603466 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57648155-302e-452f-b595-49a6146de92f" containerName="extract-utilities"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603471 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="57648155-302e-452f-b595-49a6146de92f" containerName="extract-utilities"
Mar 14 09:45:35 crc kubenswrapper[4869]: E0314 09:45:35.603481 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242aa090-55ba-434d-9557-c10267e198fb" containerName="extract-content"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603486 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="242aa090-55ba-434d-9557-c10267e198fb" containerName="extract-content"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603694 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="242aa090-55ba-434d-9557-c10267e198fb" containerName="registry-server"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603711 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30a908d-491a-4069-b609-a2929b616dc2" containerName="registry-server"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603727 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac653a3-515a-4633-bef8-e32694085b95" containerName="collect-profiles"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.603742 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="57648155-302e-452f-b595-49a6146de92f" containerName="registry-server"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.605099 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.615057 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ftwvz"]
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.755491 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-catalog-content\") pod \"redhat-marketplace-ftwvz\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") " pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.755638 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-utilities\") pod \"redhat-marketplace-ftwvz\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") " pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.755732 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84z6k\" (UniqueName: \"kubernetes.io/projected/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-kube-api-access-84z6k\") pod \"redhat-marketplace-ftwvz\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") " pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.857436 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-utilities\") pod \"redhat-marketplace-ftwvz\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") " pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.858033 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-utilities\") pod \"redhat-marketplace-ftwvz\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") " pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.858167 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84z6k\" (UniqueName: \"kubernetes.io/projected/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-kube-api-access-84z6k\") pod \"redhat-marketplace-ftwvz\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") " pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.858395 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-catalog-content\") pod \"redhat-marketplace-ftwvz\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") " pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.859770 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-catalog-content\") pod \"redhat-marketplace-ftwvz\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") " pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.888218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84z6k\" (UniqueName: \"kubernetes.io/projected/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-kube-api-access-84z6k\") pod \"redhat-marketplace-ftwvz\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") " pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:35 crc kubenswrapper[4869]: I0314 09:45:35.925983 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:36 crc kubenswrapper[4869]: I0314 09:45:36.565258 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ftwvz"]
Mar 14 09:45:36 crc kubenswrapper[4869]: I0314 09:45:36.676756 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ftwvz" event={"ID":"a2dff94d-4244-41d5-8e09-a9c05ee34ff4","Type":"ContainerStarted","Data":"fcb3df03e715e5572799d858764a9d5fb6d575cd01a799080fb4aa308a7f0228"}
Mar 14 09:45:36 crc kubenswrapper[4869]: I0314 09:45:36.704067 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f"
Mar 14 09:45:36 crc kubenswrapper[4869]: E0314 09:45:36.704340 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:45:37 crc kubenswrapper[4869]: I0314 09:45:37.688268 4869 generic.go:334] "Generic (PLEG): container finished" podID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerID="e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418" exitCode=0
Mar 14 09:45:37 crc kubenswrapper[4869]: I0314 09:45:37.688379 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ftwvz" event={"ID":"a2dff94d-4244-41d5-8e09-a9c05ee34ff4","Type":"ContainerDied","Data":"e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418"}
Mar 14 09:45:38 crc kubenswrapper[4869]: I0314 09:45:38.700680 4869 generic.go:334] "Generic (PLEG): container finished" podID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerID="942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98" exitCode=0
Mar 14 09:45:38 crc kubenswrapper[4869]: I0314 09:45:38.700791 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ftwvz" event={"ID":"a2dff94d-4244-41d5-8e09-a9c05ee34ff4","Type":"ContainerDied","Data":"942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98"}
Mar 14 09:45:39 crc kubenswrapper[4869]: I0314 09:45:39.716825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ftwvz" event={"ID":"a2dff94d-4244-41d5-8e09-a9c05ee34ff4","Type":"ContainerStarted","Data":"0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b"}
Mar 14 09:45:39 crc kubenswrapper[4869]: I0314 09:45:39.749150 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ftwvz" podStartSLOduration=3.3566374740000002 podStartE2EDuration="4.749132574s" podCreationTimestamp="2026-03-14 09:45:35 +0000 UTC" firstStartedPulling="2026-03-14 09:45:37.69069065 +0000 UTC m=+2890.662972703" lastFinishedPulling="2026-03-14 09:45:39.08318574 +0000 UTC m=+2892.055467803" observedRunningTime="2026-03-14 09:45:39.742627335 +0000 UTC m=+2892.714909388" watchObservedRunningTime="2026-03-14 09:45:39.749132574 +0000 UTC m=+2892.721414627"
Mar 14 09:45:45 crc kubenswrapper[4869]: I0314 09:45:45.926765 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:45 crc kubenswrapper[4869]: I0314 09:45:45.927501 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:45 crc kubenswrapper[4869]: I0314 09:45:45.988001 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:46 crc kubenswrapper[4869]: I0314 09:45:46.704229 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312"
Mar 14 09:45:46 crc kubenswrapper[4869]: E0314 09:45:46.704641 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:45:46 crc kubenswrapper[4869]: I0314 09:45:46.837705 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:46 crc kubenswrapper[4869]: I0314 09:45:46.894699 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ftwvz"]
Mar 14 09:45:48 crc kubenswrapper[4869]: I0314 09:45:48.789455 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ftwvz" podUID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerName="registry-server" containerID="cri-o://0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b" gracePeriod=2
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.231821 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.318943 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-catalog-content\") pod \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") "
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.319084 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84z6k\" (UniqueName: \"kubernetes.io/projected/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-kube-api-access-84z6k\") pod \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") "
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.319112 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-utilities\") pod \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\" (UID: \"a2dff94d-4244-41d5-8e09-a9c05ee34ff4\") "
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.320147 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-utilities" (OuterVolumeSpecName: "utilities") pod "a2dff94d-4244-41d5-8e09-a9c05ee34ff4" (UID: "a2dff94d-4244-41d5-8e09-a9c05ee34ff4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.326119 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-kube-api-access-84z6k" (OuterVolumeSpecName: "kube-api-access-84z6k") pod "a2dff94d-4244-41d5-8e09-a9c05ee34ff4" (UID: "a2dff94d-4244-41d5-8e09-a9c05ee34ff4"). InnerVolumeSpecName "kube-api-access-84z6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.403006 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2dff94d-4244-41d5-8e09-a9c05ee34ff4" (UID: "a2dff94d-4244-41d5-8e09-a9c05ee34ff4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.421936 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.422244 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84z6k\" (UniqueName: \"kubernetes.io/projected/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-kube-api-access-84z6k\") on node \"crc\" DevicePath \"\""
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.422376 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2dff94d-4244-41d5-8e09-a9c05ee34ff4-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.801064 4869 generic.go:334] "Generic (PLEG): container finished" podID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerID="0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b" exitCode=0
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.801104 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ftwvz" event={"ID":"a2dff94d-4244-41d5-8e09-a9c05ee34ff4","Type":"ContainerDied","Data":"0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b"}
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.801131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ftwvz" event={"ID":"a2dff94d-4244-41d5-8e09-a9c05ee34ff4","Type":"ContainerDied","Data":"fcb3df03e715e5572799d858764a9d5fb6d575cd01a799080fb4aa308a7f0228"}
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.801150 4869 scope.go:117] "RemoveContainer" containerID="0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.801143 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ftwvz"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.827033 4869 scope.go:117] "RemoveContainer" containerID="942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.829119 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ftwvz"]
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.838555 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ftwvz"]
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.848112 4869 scope.go:117] "RemoveContainer" containerID="e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.898224 4869 scope.go:117] "RemoveContainer" containerID="0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b"
Mar 14 09:45:49 crc kubenswrapper[4869]: E0314 09:45:49.898843 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b\": container with ID starting with 0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b not found: ID does not exist" containerID="0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.898883 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b"} err="failed to get container status \"0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b\": rpc error: code = NotFound desc = could not find container \"0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b\": container with ID starting with 0d1f614158d78ff3b23a0db5e66e1514bc3dc8d18844e435795390bd52c0332b not found: ID does not exist"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.898909 4869 scope.go:117] "RemoveContainer" containerID="942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98"
Mar 14 09:45:49 crc kubenswrapper[4869]: E0314 09:45:49.899306 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98\": container with ID starting with 942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98 not found: ID does not exist" containerID="942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.899336 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98"} err="failed to get container status \"942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98\": rpc error: code = NotFound desc = could not find container \"942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98\": container with ID starting with 942dc745670718668cf478fced9234018b201ce2c5f26fda1b352f009cd7dc98 not found: ID does not exist"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.899354 4869 scope.go:117] "RemoveContainer" containerID="e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418"
Mar 14 09:45:49 crc kubenswrapper[4869]: E0314 09:45:49.904007 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418\": container with ID starting with e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418 not found: ID does not exist" containerID="e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418"
Mar 14 09:45:49 crc kubenswrapper[4869]: I0314 09:45:49.904055 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418"} err="failed to get container status \"e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418\": rpc error: code = NotFound desc = could not find container \"e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418\": container with ID starting with e568f7b3da58fe3607ab764648caf8fd269851c60d6a5ce99f204c103a802418 not found: ID does not exist"
Mar 14 09:45:50 crc kubenswrapper[4869]: I0314 09:45:50.704529 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f"
Mar 14 09:45:50 crc kubenswrapper[4869]: E0314 09:45:50.704761 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 09:45:51 crc kubenswrapper[4869]: I0314 09:45:51.717190 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" path="/var/lib/kubelet/pods/a2dff94d-4244-41d5-8e09-a9c05ee34ff4/volumes"
Mar 14 09:45:59 crc kubenswrapper[4869]: I0314 09:45:59.703640 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312"
Mar 14 09:45:59 crc kubenswrapper[4869]: E0314 09:45:59.706437 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.150666 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558026-9jkcw"]
Mar 14 09:46:00 crc kubenswrapper[4869]: E0314 09:46:00.151123 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerName="extract-content"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.151162 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerName="extract-content"
Mar 14 09:46:00 crc kubenswrapper[4869]: E0314 09:46:00.151186 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerName="registry-server"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.151195 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerName="registry-server"
Mar 14 09:46:00 crc kubenswrapper[4869]: E0314 09:46:00.151213 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerName="extract-utilities"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.151219 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerName="extract-utilities"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.151452 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2dff94d-4244-41d5-8e09-a9c05ee34ff4" containerName="registry-server"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.152573 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558026-9jkcw"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.157052 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.157191 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.157383 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.161803 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558026-9jkcw"]
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.248284 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrrc\" (UniqueName: \"kubernetes.io/projected/d1bf67dc-605f-482a-ab8d-67f2e7ef76f3-kube-api-access-rsrrc\") pod \"auto-csr-approver-29558026-9jkcw\" (UID: \"d1bf67dc-605f-482a-ab8d-67f2e7ef76f3\") " pod="openshift-infra/auto-csr-approver-29558026-9jkcw"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.350771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrrc\" (UniqueName: \"kubernetes.io/projected/d1bf67dc-605f-482a-ab8d-67f2e7ef76f3-kube-api-access-rsrrc\") pod \"auto-csr-approver-29558026-9jkcw\" (UID: \"d1bf67dc-605f-482a-ab8d-67f2e7ef76f3\") " pod="openshift-infra/auto-csr-approver-29558026-9jkcw"
Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.372407 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrrc\"
(UniqueName: \"kubernetes.io/projected/d1bf67dc-605f-482a-ab8d-67f2e7ef76f3-kube-api-access-rsrrc\") pod \"auto-csr-approver-29558026-9jkcw\" (UID: \"d1bf67dc-605f-482a-ab8d-67f2e7ef76f3\") " pod="openshift-infra/auto-csr-approver-29558026-9jkcw" Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.479842 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558026-9jkcw" Mar 14 09:46:00 crc kubenswrapper[4869]: I0314 09:46:00.968861 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558026-9jkcw"] Mar 14 09:46:01 crc kubenswrapper[4869]: I0314 09:46:01.932732 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558026-9jkcw" event={"ID":"d1bf67dc-605f-482a-ab8d-67f2e7ef76f3","Type":"ContainerStarted","Data":"f59a10fbf425aff452f6e6ad69e922d9dda5a3b0a3c49c11b02a7f4744c5a482"} Mar 14 09:46:02 crc kubenswrapper[4869]: I0314 09:46:02.945207 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1bf67dc-605f-482a-ab8d-67f2e7ef76f3" containerID="e0ddcec22477297e01d156bd72265133e2f78b27ffb4ccca041e472acc843585" exitCode=0 Mar 14 09:46:02 crc kubenswrapper[4869]: I0314 09:46:02.945418 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558026-9jkcw" event={"ID":"d1bf67dc-605f-482a-ab8d-67f2e7ef76f3","Type":"ContainerDied","Data":"e0ddcec22477297e01d156bd72265133e2f78b27ffb4ccca041e472acc843585"} Mar 14 09:46:03 crc kubenswrapper[4869]: I0314 09:46:03.704580 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:46:03 crc kubenswrapper[4869]: E0314 09:46:03.705038 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:46:04 crc kubenswrapper[4869]: I0314 09:46:04.313766 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558026-9jkcw" Mar 14 09:46:04 crc kubenswrapper[4869]: I0314 09:46:04.361834 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsrrc\" (UniqueName: \"kubernetes.io/projected/d1bf67dc-605f-482a-ab8d-67f2e7ef76f3-kube-api-access-rsrrc\") pod \"d1bf67dc-605f-482a-ab8d-67f2e7ef76f3\" (UID: \"d1bf67dc-605f-482a-ab8d-67f2e7ef76f3\") " Mar 14 09:46:04 crc kubenswrapper[4869]: I0314 09:46:04.369213 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bf67dc-605f-482a-ab8d-67f2e7ef76f3-kube-api-access-rsrrc" (OuterVolumeSpecName: "kube-api-access-rsrrc") pod "d1bf67dc-605f-482a-ab8d-67f2e7ef76f3" (UID: "d1bf67dc-605f-482a-ab8d-67f2e7ef76f3"). InnerVolumeSpecName "kube-api-access-rsrrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:46:04 crc kubenswrapper[4869]: I0314 09:46:04.464593 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsrrc\" (UniqueName: \"kubernetes.io/projected/d1bf67dc-605f-482a-ab8d-67f2e7ef76f3-kube-api-access-rsrrc\") on node \"crc\" DevicePath \"\"" Mar 14 09:46:04 crc kubenswrapper[4869]: I0314 09:46:04.978824 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558026-9jkcw" event={"ID":"d1bf67dc-605f-482a-ab8d-67f2e7ef76f3","Type":"ContainerDied","Data":"f59a10fbf425aff452f6e6ad69e922d9dda5a3b0a3c49c11b02a7f4744c5a482"} Mar 14 09:46:04 crc kubenswrapper[4869]: I0314 09:46:04.978888 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f59a10fbf425aff452f6e6ad69e922d9dda5a3b0a3c49c11b02a7f4744c5a482" Mar 14 09:46:04 crc kubenswrapper[4869]: I0314 09:46:04.978938 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558026-9jkcw" Mar 14 09:46:05 crc kubenswrapper[4869]: I0314 09:46:05.384492 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558020-ksrcm"] Mar 14 09:46:05 crc kubenswrapper[4869]: I0314 09:46:05.392238 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558020-ksrcm"] Mar 14 09:46:05 crc kubenswrapper[4869]: I0314 09:46:05.715673 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fda3e65-ac70-412f-aa48-ed55a48c4774" path="/var/lib/kubelet/pods/6fda3e65-ac70-412f-aa48-ed55a48c4774/volumes" Mar 14 09:46:06 crc kubenswrapper[4869]: I0314 09:46:06.575421 4869 scope.go:117] "RemoveContainer" containerID="1dd86f6ad1be40932867bf321d0df6cc0685194318cd511353ed4cb59efeb9f4" Mar 14 09:46:13 crc kubenswrapper[4869]: I0314 09:46:13.703804 4869 scope.go:117] "RemoveContainer" 
containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:46:13 crc kubenswrapper[4869]: E0314 09:46:13.704600 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:46:17 crc kubenswrapper[4869]: I0314 09:46:17.710796 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:46:17 crc kubenswrapper[4869]: E0314 09:46:17.711442 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:46:25 crc kubenswrapper[4869]: I0314 09:46:25.704230 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:46:25 crc kubenswrapper[4869]: E0314 09:46:25.705852 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:46:30 crc kubenswrapper[4869]: I0314 09:46:30.704226 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:46:30 crc kubenswrapper[4869]: E0314 09:46:30.705060 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:46:37 crc kubenswrapper[4869]: I0314 09:46:37.709724 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:46:37 crc kubenswrapper[4869]: E0314 09:46:37.710670 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:46:39 crc kubenswrapper[4869]: I0314 09:46:39.605092 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:46:39 crc kubenswrapper[4869]: I0314 09:46:39.605438 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:46:45 crc kubenswrapper[4869]: I0314 09:46:45.704708 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:46:45 crc kubenswrapper[4869]: E0314 09:46:45.705482 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:46:50 crc kubenswrapper[4869]: I0314 09:46:50.704393 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:46:50 crc kubenswrapper[4869]: E0314 09:46:50.705172 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:47:00 crc kubenswrapper[4869]: I0314 09:47:00.704804 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:47:00 crc kubenswrapper[4869]: E0314 09:47:00.705737 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:47:05 crc kubenswrapper[4869]: I0314 09:47:05.707994 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:47:05 crc kubenswrapper[4869]: E0314 09:47:05.708728 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:47:09 crc kubenswrapper[4869]: I0314 
09:47:09.605471 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:47:09 crc kubenswrapper[4869]: I0314 09:47:09.606093 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:47:14 crc kubenswrapper[4869]: I0314 09:47:14.704278 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:47:14 crc kubenswrapper[4869]: E0314 09:47:14.704914 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:47:17 crc kubenswrapper[4869]: I0314 09:47:17.713006 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:47:18 crc kubenswrapper[4869]: I0314 09:47:18.691472 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0"} Mar 14 09:47:24 crc kubenswrapper[4869]: I0314 09:47:24.539324 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:47:24 crc kubenswrapper[4869]: I0314 
09:47:24.540034 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:47:25 crc kubenswrapper[4869]: I0314 09:47:25.704435 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:47:25 crc kubenswrapper[4869]: E0314 09:47:25.705007 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:47:26 crc kubenswrapper[4869]: I0314 09:47:26.793949 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" exitCode=1 Mar 14 09:47:26 crc kubenswrapper[4869]: I0314 09:47:26.794039 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0"} Mar 14 09:47:26 crc kubenswrapper[4869]: I0314 09:47:26.794346 4869 scope.go:117] "RemoveContainer" containerID="0d0e1e8b35a71cc9ae1be6583ff475428fb17e386e16fdb8c89d083bfedb9312" Mar 14 09:47:26 crc kubenswrapper[4869]: I0314 09:47:26.795839 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:47:26 crc kubenswrapper[4869]: E0314 09:47:26.796254 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" 
pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:47:34 crc kubenswrapper[4869]: I0314 09:47:34.538854 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:47:34 crc kubenswrapper[4869]: I0314 09:47:34.539441 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:47:34 crc kubenswrapper[4869]: I0314 09:47:34.540304 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:47:34 crc kubenswrapper[4869]: E0314 09:47:34.540593 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:47:39 crc kubenswrapper[4869]: I0314 09:47:39.605753 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:47:39 crc kubenswrapper[4869]: I0314 09:47:39.606452 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:47:39 crc kubenswrapper[4869]: I0314 09:47:39.606621 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 
09:47:39 crc kubenswrapper[4869]: I0314 09:47:39.607847 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:47:39 crc kubenswrapper[4869]: I0314 09:47:39.607992 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" gracePeriod=600 Mar 14 09:47:39 crc kubenswrapper[4869]: E0314 09:47:39.744078 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:47:39 crc kubenswrapper[4869]: I0314 09:47:39.920283 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" exitCode=0 Mar 14 09:47:39 crc kubenswrapper[4869]: I0314 09:47:39.920361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e"} Mar 14 09:47:39 crc kubenswrapper[4869]: I0314 09:47:39.920674 4869 scope.go:117] 
"RemoveContainer" containerID="c9cd0c2cf419477519bc0c878cc4d31fa5648d9e139fd0661ec36ae8f1c04dd7" Mar 14 09:47:39 crc kubenswrapper[4869]: I0314 09:47:39.921391 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:47:39 crc kubenswrapper[4869]: E0314 09:47:39.922005 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:47:40 crc kubenswrapper[4869]: I0314 09:47:40.704141 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:47:40 crc kubenswrapper[4869]: I0314 09:47:40.932677 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de"} Mar 14 09:47:44 crc kubenswrapper[4869]: I0314 09:47:44.404371 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:47:44 crc kubenswrapper[4869]: I0314 09:47:44.404949 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:47:47 crc kubenswrapper[4869]: I0314 09:47:47.714051 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:47:47 crc kubenswrapper[4869]: E0314 09:47:47.715420 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:47:50 crc kubenswrapper[4869]: I0314 09:47:50.042799 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" exitCode=1 Mar 14 09:47:50 crc kubenswrapper[4869]: I0314 09:47:50.042917 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de"} Mar 14 09:47:50 crc kubenswrapper[4869]: I0314 09:47:50.043316 4869 scope.go:117] "RemoveContainer" containerID="37746fe6cc8061d5f0609c16fea40ae663c69a573c13b7942c6ee983e4ec2a0f" Mar 14 09:47:50 crc kubenswrapper[4869]: I0314 09:47:50.044884 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:47:50 crc kubenswrapper[4869]: E0314 09:47:50.045343 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:47:54 crc kubenswrapper[4869]: I0314 09:47:54.404377 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:47:54 crc kubenswrapper[4869]: I0314 09:47:54.405010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:47:54 crc kubenswrapper[4869]: I0314 09:47:54.406311 4869 scope.go:117] "RemoveContainer" 
containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:47:54 crc kubenswrapper[4869]: E0314 09:47:54.406665 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:47:54 crc kubenswrapper[4869]: I0314 09:47:54.704640 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:47:54 crc kubenswrapper[4869]: E0314 09:47:54.705252 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.160692 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558028-zrk96"] Mar 14 09:48:00 crc kubenswrapper[4869]: E0314 09:48:00.161960 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bf67dc-605f-482a-ab8d-67f2e7ef76f3" containerName="oc" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.161984 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bf67dc-605f-482a-ab8d-67f2e7ef76f3" containerName="oc" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.162398 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1bf67dc-605f-482a-ab8d-67f2e7ef76f3" containerName="oc" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.163786 4869 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558028-zrk96" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.166907 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.167185 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.167322 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.174477 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558028-zrk96"] Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.334480 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxwb\" (UniqueName: \"kubernetes.io/projected/f71c6b9a-8ca6-434d-a48c-f269906d3ba8-kube-api-access-xvxwb\") pod \"auto-csr-approver-29558028-zrk96\" (UID: \"f71c6b9a-8ca6-434d-a48c-f269906d3ba8\") " pod="openshift-infra/auto-csr-approver-29558028-zrk96" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.436747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvxwb\" (UniqueName: \"kubernetes.io/projected/f71c6b9a-8ca6-434d-a48c-f269906d3ba8-kube-api-access-xvxwb\") pod \"auto-csr-approver-29558028-zrk96\" (UID: \"f71c6b9a-8ca6-434d-a48c-f269906d3ba8\") " pod="openshift-infra/auto-csr-approver-29558028-zrk96" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.458910 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvxwb\" (UniqueName: \"kubernetes.io/projected/f71c6b9a-8ca6-434d-a48c-f269906d3ba8-kube-api-access-xvxwb\") pod \"auto-csr-approver-29558028-zrk96\" (UID: \"f71c6b9a-8ca6-434d-a48c-f269906d3ba8\") 
" pod="openshift-infra/auto-csr-approver-29558028-zrk96" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.492414 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558028-zrk96" Mar 14 09:48:00 crc kubenswrapper[4869]: I0314 09:48:00.999409 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558028-zrk96"] Mar 14 09:48:01 crc kubenswrapper[4869]: I0314 09:48:01.014608 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:48:01 crc kubenswrapper[4869]: I0314 09:48:01.182981 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558028-zrk96" event={"ID":"f71c6b9a-8ca6-434d-a48c-f269906d3ba8","Type":"ContainerStarted","Data":"9129fb0fa453630bbca791085f3d00e89f992c99544f6e4e6acee8190a010f4b"} Mar 14 09:48:02 crc kubenswrapper[4869]: I0314 09:48:02.704849 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:48:02 crc kubenswrapper[4869]: E0314 09:48:02.707004 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:48:03 crc kubenswrapper[4869]: I0314 09:48:03.202602 4869 generic.go:334] "Generic (PLEG): container finished" podID="f71c6b9a-8ca6-434d-a48c-f269906d3ba8" containerID="63d0a6010475880d8b05984e2d03ac5a5e7e8c920f0c9e007cdba5088dcca272" exitCode=0 Mar 14 09:48:03 crc kubenswrapper[4869]: I0314 09:48:03.202646 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558028-zrk96" 
event={"ID":"f71c6b9a-8ca6-434d-a48c-f269906d3ba8","Type":"ContainerDied","Data":"63d0a6010475880d8b05984e2d03ac5a5e7e8c920f0c9e007cdba5088dcca272"} Mar 14 09:48:04 crc kubenswrapper[4869]: I0314 09:48:04.592647 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558028-zrk96" Mar 14 09:48:04 crc kubenswrapper[4869]: I0314 09:48:04.733142 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvxwb\" (UniqueName: \"kubernetes.io/projected/f71c6b9a-8ca6-434d-a48c-f269906d3ba8-kube-api-access-xvxwb\") pod \"f71c6b9a-8ca6-434d-a48c-f269906d3ba8\" (UID: \"f71c6b9a-8ca6-434d-a48c-f269906d3ba8\") " Mar 14 09:48:04 crc kubenswrapper[4869]: I0314 09:48:04.740814 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71c6b9a-8ca6-434d-a48c-f269906d3ba8-kube-api-access-xvxwb" (OuterVolumeSpecName: "kube-api-access-xvxwb") pod "f71c6b9a-8ca6-434d-a48c-f269906d3ba8" (UID: "f71c6b9a-8ca6-434d-a48c-f269906d3ba8"). InnerVolumeSpecName "kube-api-access-xvxwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:48:04 crc kubenswrapper[4869]: I0314 09:48:04.835617 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvxwb\" (UniqueName: \"kubernetes.io/projected/f71c6b9a-8ca6-434d-a48c-f269906d3ba8-kube-api-access-xvxwb\") on node \"crc\" DevicePath \"\"" Mar 14 09:48:05 crc kubenswrapper[4869]: I0314 09:48:05.225207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558028-zrk96" event={"ID":"f71c6b9a-8ca6-434d-a48c-f269906d3ba8","Type":"ContainerDied","Data":"9129fb0fa453630bbca791085f3d00e89f992c99544f6e4e6acee8190a010f4b"} Mar 14 09:48:05 crc kubenswrapper[4869]: I0314 09:48:05.225252 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9129fb0fa453630bbca791085f3d00e89f992c99544f6e4e6acee8190a010f4b" Mar 14 09:48:05 crc kubenswrapper[4869]: I0314 09:48:05.225296 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558028-zrk96" Mar 14 09:48:05 crc kubenswrapper[4869]: I0314 09:48:05.682338 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558022-vgmfn"] Mar 14 09:48:05 crc kubenswrapper[4869]: I0314 09:48:05.694078 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558022-vgmfn"] Mar 14 09:48:05 crc kubenswrapper[4869]: I0314 09:48:05.715845 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a3866f-8a94-4c92-bf28-c86ae06c677b" path="/var/lib/kubelet/pods/72a3866f-8a94-4c92-bf28-c86ae06c677b/volumes" Mar 14 09:48:06 crc kubenswrapper[4869]: I0314 09:48:06.704203 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:48:06 crc kubenswrapper[4869]: E0314 09:48:06.704658 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:48:06 crc kubenswrapper[4869]: I0314 09:48:06.727301 4869 scope.go:117] "RemoveContainer" containerID="73854500fab37c9a4a5c48dae7fc2041d64e44c6e99ffca659f073948a2ba003" Mar 14 09:48:09 crc kubenswrapper[4869]: I0314 09:48:09.705081 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:48:09 crc kubenswrapper[4869]: E0314 09:48:09.705590 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:48:13 crc kubenswrapper[4869]: I0314 09:48:13.704413 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:48:13 crc kubenswrapper[4869]: E0314 09:48:13.705277 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:48:21 crc kubenswrapper[4869]: I0314 09:48:21.704548 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:48:21 crc kubenswrapper[4869]: E0314 09:48:21.705303 4869 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:48:22 crc kubenswrapper[4869]: I0314 09:48:22.704111 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:48:22 crc kubenswrapper[4869]: E0314 09:48:22.704628 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:48:27 crc kubenswrapper[4869]: I0314 09:48:27.713109 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:48:27 crc kubenswrapper[4869]: E0314 09:48:27.713928 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:48:34 crc kubenswrapper[4869]: I0314 09:48:34.704503 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:48:34 crc kubenswrapper[4869]: E0314 09:48:34.705940 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:48:36 crc kubenswrapper[4869]: I0314 09:48:36.704446 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:48:36 crc kubenswrapper[4869]: E0314 09:48:36.705860 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:48:39 crc kubenswrapper[4869]: I0314 09:48:39.705706 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:48:39 crc kubenswrapper[4869]: E0314 09:48:39.706239 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:48:45 crc kubenswrapper[4869]: I0314 09:48:45.704276 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:48:45 crc kubenswrapper[4869]: E0314 09:48:45.706954 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:48:47 crc kubenswrapper[4869]: I0314 09:48:47.710261 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:48:47 crc kubenswrapper[4869]: E0314 09:48:47.711396 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:48:50 crc kubenswrapper[4869]: I0314 09:48:50.704634 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:48:50 crc kubenswrapper[4869]: E0314 09:48:50.705453 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:48:57 crc kubenswrapper[4869]: I0314 09:48:57.712031 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:48:57 crc kubenswrapper[4869]: E0314 09:48:57.712999 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:48:59 crc kubenswrapper[4869]: I0314 09:48:59.703768 4869 scope.go:117] "RemoveContainer" 
containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:48:59 crc kubenswrapper[4869]: E0314 09:48:59.704067 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:49:03 crc kubenswrapper[4869]: I0314 09:49:03.704396 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:49:03 crc kubenswrapper[4869]: E0314 09:49:03.705746 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:49:10 crc kubenswrapper[4869]: I0314 09:49:10.705494 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:49:10 crc kubenswrapper[4869]: E0314 09:49:10.706679 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:49:14 crc kubenswrapper[4869]: I0314 09:49:14.704095 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:49:14 crc kubenswrapper[4869]: E0314 09:49:14.704909 4869 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:49:15 crc kubenswrapper[4869]: I0314 09:49:15.704941 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:49:15 crc kubenswrapper[4869]: E0314 09:49:15.705423 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:49:22 crc kubenswrapper[4869]: I0314 09:49:22.703881 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:49:22 crc kubenswrapper[4869]: E0314 09:49:22.705203 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:49:27 crc kubenswrapper[4869]: I0314 09:49:27.716227 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:49:27 crc kubenswrapper[4869]: E0314 09:49:27.717224 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:49:29 crc kubenswrapper[4869]: I0314 09:49:29.704458 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:49:29 crc kubenswrapper[4869]: E0314 09:49:29.706594 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:49:34 crc kubenswrapper[4869]: I0314 09:49:34.703849 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:49:34 crc kubenswrapper[4869]: E0314 09:49:34.704595 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:49:38 crc kubenswrapper[4869]: I0314 09:49:38.704700 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:49:38 crc kubenswrapper[4869]: E0314 09:49:38.705273 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" 
pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:49:43 crc kubenswrapper[4869]: I0314 09:49:43.704098 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:49:43 crc kubenswrapper[4869]: E0314 09:49:43.704887 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:49:48 crc kubenswrapper[4869]: I0314 09:49:48.704845 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:49:48 crc kubenswrapper[4869]: E0314 09:49:48.705724 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:49:50 crc kubenswrapper[4869]: I0314 09:49:50.704008 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:49:50 crc kubenswrapper[4869]: E0314 09:49:50.704527 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:49:58 crc kubenswrapper[4869]: I0314 09:49:58.704123 
4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:49:58 crc kubenswrapper[4869]: E0314 09:49:58.704858 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:49:59 crc kubenswrapper[4869]: I0314 09:49:59.704417 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:49:59 crc kubenswrapper[4869]: E0314 09:49:59.704941 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.159089 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558030-dgn88"] Mar 14 09:50:00 crc kubenswrapper[4869]: E0314 09:50:00.159628 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71c6b9a-8ca6-434d-a48c-f269906d3ba8" containerName="oc" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.159650 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71c6b9a-8ca6-434d-a48c-f269906d3ba8" containerName="oc" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.159847 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71c6b9a-8ca6-434d-a48c-f269906d3ba8" containerName="oc" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.160600 4869 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558030-dgn88" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.165187 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.165456 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.165655 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.176743 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558030-dgn88"] Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.292749 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7dgv\" (UniqueName: \"kubernetes.io/projected/7cf0393f-6ef4-4bf9-8d30-33da4902cf9f-kube-api-access-k7dgv\") pod \"auto-csr-approver-29558030-dgn88\" (UID: \"7cf0393f-6ef4-4bf9-8d30-33da4902cf9f\") " pod="openshift-infra/auto-csr-approver-29558030-dgn88" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.395304 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7dgv\" (UniqueName: \"kubernetes.io/projected/7cf0393f-6ef4-4bf9-8d30-33da4902cf9f-kube-api-access-k7dgv\") pod \"auto-csr-approver-29558030-dgn88\" (UID: \"7cf0393f-6ef4-4bf9-8d30-33da4902cf9f\") " pod="openshift-infra/auto-csr-approver-29558030-dgn88" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.417444 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7dgv\" (UniqueName: \"kubernetes.io/projected/7cf0393f-6ef4-4bf9-8d30-33da4902cf9f-kube-api-access-k7dgv\") pod \"auto-csr-approver-29558030-dgn88\" (UID: 
\"7cf0393f-6ef4-4bf9-8d30-33da4902cf9f\") " pod="openshift-infra/auto-csr-approver-29558030-dgn88" Mar 14 09:50:00 crc kubenswrapper[4869]: I0314 09:50:00.491713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558030-dgn88" Mar 14 09:50:01 crc kubenswrapper[4869]: I0314 09:50:01.143596 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558030-dgn88"] Mar 14 09:50:01 crc kubenswrapper[4869]: I0314 09:50:01.354170 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558030-dgn88" event={"ID":"7cf0393f-6ef4-4bf9-8d30-33da4902cf9f","Type":"ContainerStarted","Data":"e587446d4b94a31f21b22e77def1c277156f766ded70e3a7dadc49cc99fdb6da"} Mar 14 09:50:03 crc kubenswrapper[4869]: I0314 09:50:03.376399 4869 generic.go:334] "Generic (PLEG): container finished" podID="7cf0393f-6ef4-4bf9-8d30-33da4902cf9f" containerID="7d053bb427ad60ee8541e2e8e4447d4e5c3cb6e8aead438f03964f69bf13021e" exitCode=0 Mar 14 09:50:03 crc kubenswrapper[4869]: I0314 09:50:03.376454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558030-dgn88" event={"ID":"7cf0393f-6ef4-4bf9-8d30-33da4902cf9f","Type":"ContainerDied","Data":"7d053bb427ad60ee8541e2e8e4447d4e5c3cb6e8aead438f03964f69bf13021e"} Mar 14 09:50:04 crc kubenswrapper[4869]: I0314 09:50:04.704077 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:50:04 crc kubenswrapper[4869]: E0314 09:50:04.704721 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:50:04 crc kubenswrapper[4869]: 
I0314 09:50:04.716460 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558030-dgn88" Mar 14 09:50:04 crc kubenswrapper[4869]: I0314 09:50:04.897625 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7dgv\" (UniqueName: \"kubernetes.io/projected/7cf0393f-6ef4-4bf9-8d30-33da4902cf9f-kube-api-access-k7dgv\") pod \"7cf0393f-6ef4-4bf9-8d30-33da4902cf9f\" (UID: \"7cf0393f-6ef4-4bf9-8d30-33da4902cf9f\") " Mar 14 09:50:04 crc kubenswrapper[4869]: I0314 09:50:04.903966 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf0393f-6ef4-4bf9-8d30-33da4902cf9f-kube-api-access-k7dgv" (OuterVolumeSpecName: "kube-api-access-k7dgv") pod "7cf0393f-6ef4-4bf9-8d30-33da4902cf9f" (UID: "7cf0393f-6ef4-4bf9-8d30-33da4902cf9f"). InnerVolumeSpecName "kube-api-access-k7dgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:50:05 crc kubenswrapper[4869]: I0314 09:50:05.000110 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7dgv\" (UniqueName: \"kubernetes.io/projected/7cf0393f-6ef4-4bf9-8d30-33da4902cf9f-kube-api-access-k7dgv\") on node \"crc\" DevicePath \"\"" Mar 14 09:50:05 crc kubenswrapper[4869]: I0314 09:50:05.408729 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558030-dgn88" event={"ID":"7cf0393f-6ef4-4bf9-8d30-33da4902cf9f","Type":"ContainerDied","Data":"e587446d4b94a31f21b22e77def1c277156f766ded70e3a7dadc49cc99fdb6da"} Mar 14 09:50:05 crc kubenswrapper[4869]: I0314 09:50:05.408772 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e587446d4b94a31f21b22e77def1c277156f766ded70e3a7dadc49cc99fdb6da" Mar 14 09:50:05 crc kubenswrapper[4869]: I0314 09:50:05.408824 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558030-dgn88" Mar 14 09:50:05 crc kubenswrapper[4869]: I0314 09:50:05.791371 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558024-qzv7l"] Mar 14 09:50:05 crc kubenswrapper[4869]: I0314 09:50:05.800483 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558024-qzv7l"] Mar 14 09:50:07 crc kubenswrapper[4869]: I0314 09:50:07.718485 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb2efd49-a138-45a3-87f7-d811d7fc100a" path="/var/lib/kubelet/pods/eb2efd49-a138-45a3-87f7-d811d7fc100a/volumes" Mar 14 09:50:11 crc kubenswrapper[4869]: I0314 09:50:11.704487 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:50:11 crc kubenswrapper[4869]: E0314 09:50:11.705420 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:50:11 crc kubenswrapper[4869]: I0314 09:50:11.705915 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:50:11 crc kubenswrapper[4869]: E0314 09:50:11.706460 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:50:16 crc kubenswrapper[4869]: I0314 
09:50:16.705072 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:50:16 crc kubenswrapper[4869]: E0314 09:50:16.705909 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:50:23 crc kubenswrapper[4869]: I0314 09:50:23.703734 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:50:23 crc kubenswrapper[4869]: E0314 09:50:23.704765 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:50:23 crc kubenswrapper[4869]: I0314 09:50:23.706156 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:50:23 crc kubenswrapper[4869]: E0314 09:50:23.713333 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:50:31 crc kubenswrapper[4869]: I0314 09:50:31.704596 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:50:31 
crc kubenswrapper[4869]: E0314 09:50:31.705393 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:50:35 crc kubenswrapper[4869]: I0314 09:50:35.705252 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:50:35 crc kubenswrapper[4869]: E0314 09:50:35.706660 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:50:38 crc kubenswrapper[4869]: I0314 09:50:38.704231 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:50:38 crc kubenswrapper[4869]: E0314 09:50:38.704849 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:50:42 crc kubenswrapper[4869]: I0314 09:50:42.704492 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:50:42 crc kubenswrapper[4869]: E0314 09:50:42.705320 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:50:47 crc kubenswrapper[4869]: I0314 09:50:47.713288 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:50:47 crc kubenswrapper[4869]: E0314 09:50:47.714377 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:50:51 crc kubenswrapper[4869]: I0314 09:50:51.704827 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:50:51 crc kubenswrapper[4869]: E0314 09:50:51.705387 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:50:55 crc kubenswrapper[4869]: I0314 09:50:55.706310 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:50:55 crc kubenswrapper[4869]: E0314 09:50:55.707213 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:51:00 crc kubenswrapper[4869]: I0314 09:51:00.704292 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:51:00 crc kubenswrapper[4869]: E0314 09:51:00.705461 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:51:05 crc kubenswrapper[4869]: I0314 09:51:05.704562 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:51:05 crc kubenswrapper[4869]: E0314 09:51:05.705438 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:51:06 crc kubenswrapper[4869]: I0314 09:51:06.840061 4869 scope.go:117] "RemoveContainer" containerID="9cd30d2c132dbfdcf9273aff04123eceacccf7644f783581c86c301666f41618" Mar 14 09:51:08 crc kubenswrapper[4869]: I0314 09:51:08.704037 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:51:08 crc kubenswrapper[4869]: E0314 09:51:08.704813 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:51:11 crc kubenswrapper[4869]: I0314 09:51:11.705203 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:51:11 crc kubenswrapper[4869]: E0314 09:51:11.706112 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:51:18 crc kubenswrapper[4869]: I0314 09:51:18.704734 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:51:18 crc kubenswrapper[4869]: E0314 09:51:18.705618 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:51:19 crc kubenswrapper[4869]: I0314 09:51:19.703950 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:51:19 crc kubenswrapper[4869]: E0314 09:51:19.704540 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" 
pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:51:26 crc kubenswrapper[4869]: I0314 09:51:26.704420 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:51:26 crc kubenswrapper[4869]: E0314 09:51:26.705163 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:51:32 crc kubenswrapper[4869]: I0314 09:51:32.705126 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:51:32 crc kubenswrapper[4869]: I0314 09:51:32.706074 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:51:32 crc kubenswrapper[4869]: E0314 09:51:32.706389 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:51:32 crc kubenswrapper[4869]: E0314 09:51:32.706422 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:51:41 crc kubenswrapper[4869]: I0314 09:51:41.705063 
4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:51:41 crc kubenswrapper[4869]: E0314 09:51:41.707087 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:51:43 crc kubenswrapper[4869]: I0314 09:51:43.703945 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:51:43 crc kubenswrapper[4869]: E0314 09:51:43.704311 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:51:45 crc kubenswrapper[4869]: I0314 09:51:45.704808 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:51:45 crc kubenswrapper[4869]: E0314 09:51:45.705195 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:51:53 crc kubenswrapper[4869]: I0314 09:51:53.704240 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:51:53 crc 
kubenswrapper[4869]: E0314 09:51:53.705733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:51:55 crc kubenswrapper[4869]: I0314 09:51:55.726739 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:51:55 crc kubenswrapper[4869]: E0314 09:51:55.730057 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:51:56 crc kubenswrapper[4869]: I0314 09:51:56.704783 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:51:56 crc kubenswrapper[4869]: E0314 09:51:56.705375 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.147486 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558032-85cfb"] Mar 14 09:52:00 crc kubenswrapper[4869]: E0314 09:52:00.148726 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf0393f-6ef4-4bf9-8d30-33da4902cf9f" 
containerName="oc" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.148741 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf0393f-6ef4-4bf9-8d30-33da4902cf9f" containerName="oc" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.148942 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf0393f-6ef4-4bf9-8d30-33da4902cf9f" containerName="oc" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.149740 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558032-85cfb" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.154206 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.154481 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.156145 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.158600 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558032-85cfb"] Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.322694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s6mc\" (UniqueName: \"kubernetes.io/projected/451f8d82-2d10-4e49-9d47-b45773325a53-kube-api-access-6s6mc\") pod \"auto-csr-approver-29558032-85cfb\" (UID: \"451f8d82-2d10-4e49-9d47-b45773325a53\") " pod="openshift-infra/auto-csr-approver-29558032-85cfb" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.424954 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s6mc\" (UniqueName: \"kubernetes.io/projected/451f8d82-2d10-4e49-9d47-b45773325a53-kube-api-access-6s6mc\") pod 
\"auto-csr-approver-29558032-85cfb\" (UID: \"451f8d82-2d10-4e49-9d47-b45773325a53\") " pod="openshift-infra/auto-csr-approver-29558032-85cfb" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.444082 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s6mc\" (UniqueName: \"kubernetes.io/projected/451f8d82-2d10-4e49-9d47-b45773325a53-kube-api-access-6s6mc\") pod \"auto-csr-approver-29558032-85cfb\" (UID: \"451f8d82-2d10-4e49-9d47-b45773325a53\") " pod="openshift-infra/auto-csr-approver-29558032-85cfb" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.474257 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558032-85cfb" Mar 14 09:52:00 crc kubenswrapper[4869]: I0314 09:52:00.960081 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558032-85cfb"] Mar 14 09:52:00 crc kubenswrapper[4869]: W0314 09:52:00.964642 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod451f8d82_2d10_4e49_9d47_b45773325a53.slice/crio-85a83c2e55fe050842f93c35ae7329dcf7ddb82e2a7e1d2f770fdd2d220ec097 WatchSource:0}: Error finding container 85a83c2e55fe050842f93c35ae7329dcf7ddb82e2a7e1d2f770fdd2d220ec097: Status 404 returned error can't find the container with id 85a83c2e55fe050842f93c35ae7329dcf7ddb82e2a7e1d2f770fdd2d220ec097 Mar 14 09:52:01 crc kubenswrapper[4869]: I0314 09:52:01.554985 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558032-85cfb" event={"ID":"451f8d82-2d10-4e49-9d47-b45773325a53","Type":"ContainerStarted","Data":"85a83c2e55fe050842f93c35ae7329dcf7ddb82e2a7e1d2f770fdd2d220ec097"} Mar 14 09:52:02 crc kubenswrapper[4869]: I0314 09:52:02.566345 4869 generic.go:334] "Generic (PLEG): container finished" podID="451f8d82-2d10-4e49-9d47-b45773325a53" 
containerID="c04ae7a1cd6f63394b24a30f7ec3ff3b146f8fef8cd59e26f3fd22c8d87b30ca" exitCode=0 Mar 14 09:52:02 crc kubenswrapper[4869]: I0314 09:52:02.566476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558032-85cfb" event={"ID":"451f8d82-2d10-4e49-9d47-b45773325a53","Type":"ContainerDied","Data":"c04ae7a1cd6f63394b24a30f7ec3ff3b146f8fef8cd59e26f3fd22c8d87b30ca"} Mar 14 09:52:03 crc kubenswrapper[4869]: I0314 09:52:03.979093 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558032-85cfb" Mar 14 09:52:04 crc kubenswrapper[4869]: I0314 09:52:04.105811 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s6mc\" (UniqueName: \"kubernetes.io/projected/451f8d82-2d10-4e49-9d47-b45773325a53-kube-api-access-6s6mc\") pod \"451f8d82-2d10-4e49-9d47-b45773325a53\" (UID: \"451f8d82-2d10-4e49-9d47-b45773325a53\") " Mar 14 09:52:04 crc kubenswrapper[4869]: I0314 09:52:04.115295 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451f8d82-2d10-4e49-9d47-b45773325a53-kube-api-access-6s6mc" (OuterVolumeSpecName: "kube-api-access-6s6mc") pod "451f8d82-2d10-4e49-9d47-b45773325a53" (UID: "451f8d82-2d10-4e49-9d47-b45773325a53"). InnerVolumeSpecName "kube-api-access-6s6mc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:52:04 crc kubenswrapper[4869]: I0314 09:52:04.209158 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s6mc\" (UniqueName: \"kubernetes.io/projected/451f8d82-2d10-4e49-9d47-b45773325a53-kube-api-access-6s6mc\") on node \"crc\" DevicePath \"\"" Mar 14 09:52:04 crc kubenswrapper[4869]: I0314 09:52:04.586223 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558032-85cfb" event={"ID":"451f8d82-2d10-4e49-9d47-b45773325a53","Type":"ContainerDied","Data":"85a83c2e55fe050842f93c35ae7329dcf7ddb82e2a7e1d2f770fdd2d220ec097"} Mar 14 09:52:04 crc kubenswrapper[4869]: I0314 09:52:04.586302 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85a83c2e55fe050842f93c35ae7329dcf7ddb82e2a7e1d2f770fdd2d220ec097" Mar 14 09:52:04 crc kubenswrapper[4869]: I0314 09:52:04.586338 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558032-85cfb" Mar 14 09:52:05 crc kubenswrapper[4869]: I0314 09:52:05.058666 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558026-9jkcw"] Mar 14 09:52:05 crc kubenswrapper[4869]: I0314 09:52:05.068429 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558026-9jkcw"] Mar 14 09:52:05 crc kubenswrapper[4869]: I0314 09:52:05.704035 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:52:05 crc kubenswrapper[4869]: E0314 09:52:05.704853 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:52:05 crc kubenswrapper[4869]: I0314 09:52:05.717640 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bf67dc-605f-482a-ab8d-67f2e7ef76f3" path="/var/lib/kubelet/pods/d1bf67dc-605f-482a-ab8d-67f2e7ef76f3/volumes" Mar 14 09:52:06 crc kubenswrapper[4869]: I0314 09:52:06.961182 4869 scope.go:117] "RemoveContainer" containerID="e0ddcec22477297e01d156bd72265133e2f78b27ffb4ccca041e472acc843585" Mar 14 09:52:09 crc kubenswrapper[4869]: I0314 09:52:09.704388 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:52:09 crc kubenswrapper[4869]: E0314 09:52:09.705409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:52:10 crc kubenswrapper[4869]: I0314 09:52:10.703670 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:52:10 crc kubenswrapper[4869]: E0314 09:52:10.704191 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:52:17 crc kubenswrapper[4869]: I0314 09:52:17.710188 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:52:17 crc kubenswrapper[4869]: E0314 09:52:17.710897 4869 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:52:20 crc kubenswrapper[4869]: I0314 09:52:20.704025 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:52:20 crc kubenswrapper[4869]: E0314 09:52:20.704753 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:52:24 crc kubenswrapper[4869]: I0314 09:52:24.704615 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:52:24 crc kubenswrapper[4869]: E0314 09:52:24.705675 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:52:31 crc kubenswrapper[4869]: I0314 09:52:31.704733 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:52:32 crc kubenswrapper[4869]: I0314 09:52:32.704568 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:52:32 crc 
kubenswrapper[4869]: E0314 09:52:32.705121 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:52:32 crc kubenswrapper[4869]: I0314 09:52:32.872031 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894"} Mar 14 09:52:34 crc kubenswrapper[4869]: I0314 09:52:34.539456 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:52:34 crc kubenswrapper[4869]: I0314 09:52:34.539828 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:52:36 crc kubenswrapper[4869]: I0314 09:52:36.704090 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:52:36 crc kubenswrapper[4869]: E0314 09:52:36.704720 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:52:40 crc kubenswrapper[4869]: I0314 09:52:40.949874 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" exitCode=1 Mar 14 09:52:40 crc 
kubenswrapper[4869]: I0314 09:52:40.949926 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894"} Mar 14 09:52:40 crc kubenswrapper[4869]: I0314 09:52:40.950272 4869 scope.go:117] "RemoveContainer" containerID="c3f6864c4ec8c35c664f309ddb936e85e9976624b2e76d7a3b2957dccb09e1a0" Mar 14 09:52:40 crc kubenswrapper[4869]: I0314 09:52:40.951139 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:52:40 crc kubenswrapper[4869]: E0314 09:52:40.951543 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:52:44 crc kubenswrapper[4869]: I0314 09:52:44.539015 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:52:44 crc kubenswrapper[4869]: I0314 09:52:44.539688 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:52:44 crc kubenswrapper[4869]: I0314 09:52:44.541136 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:52:44 crc kubenswrapper[4869]: E0314 09:52:44.541449 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" 
podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:52:47 crc kubenswrapper[4869]: I0314 09:52:47.711702 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:52:47 crc kubenswrapper[4869]: E0314 09:52:47.712383 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:52:50 crc kubenswrapper[4869]: I0314 09:52:50.703822 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:52:51 crc kubenswrapper[4869]: I0314 09:52:51.057198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"67876cd828f4b777d28fcf869a354af256d2cf26d5306f0de4c3d4644fecdd2a"} Mar 14 09:52:57 crc kubenswrapper[4869]: I0314 09:52:57.711918 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:52:57 crc kubenswrapper[4869]: E0314 09:52:57.713640 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:53:01 crc kubenswrapper[4869]: I0314 09:53:01.705078 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:53:02 crc kubenswrapper[4869]: I0314 09:53:02.181497 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e"} Mar 14 09:53:04 crc kubenswrapper[4869]: I0314 09:53:04.405159 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:53:04 crc kubenswrapper[4869]: I0314 09:53:04.405752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:53:09 crc kubenswrapper[4869]: I0314 09:53:09.713478 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:53:09 crc kubenswrapper[4869]: E0314 09:53:09.716338 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:53:11 crc kubenswrapper[4869]: I0314 09:53:11.280955 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" exitCode=1 Mar 14 09:53:11 crc kubenswrapper[4869]: I0314 09:53:11.281029 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e"} Mar 14 09:53:11 crc kubenswrapper[4869]: I0314 09:53:11.281337 4869 scope.go:117] "RemoveContainer" containerID="ce970b3a162c42257407c1cc6c8a387e5f1ec9541d0b28fe743392011988a3de" Mar 14 09:53:11 crc kubenswrapper[4869]: I0314 09:53:11.283022 4869 
scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:53:11 crc kubenswrapper[4869]: E0314 09:53:11.283614 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:53:14 crc kubenswrapper[4869]: I0314 09:53:14.404696 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:53:14 crc kubenswrapper[4869]: I0314 09:53:14.405061 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:53:14 crc kubenswrapper[4869]: I0314 09:53:14.405958 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:53:14 crc kubenswrapper[4869]: E0314 09:53:14.406403 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:53:24 crc kubenswrapper[4869]: I0314 09:53:24.705013 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:53:24 crc kubenswrapper[4869]: E0314 09:53:24.718759 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" 
pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:53:28 crc kubenswrapper[4869]: I0314 09:53:28.704556 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:53:28 crc kubenswrapper[4869]: E0314 09:53:28.705500 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:53:36 crc kubenswrapper[4869]: I0314 09:53:36.704940 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:53:36 crc kubenswrapper[4869]: E0314 09:53:36.706181 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:53:40 crc kubenswrapper[4869]: I0314 09:53:40.704070 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:53:40 crc kubenswrapper[4869]: E0314 09:53:40.704792 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:53:50 crc kubenswrapper[4869]: I0314 09:53:50.704093 4869 scope.go:117] "RemoveContainer" 
containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:53:50 crc kubenswrapper[4869]: E0314 09:53:50.704898 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:53:55 crc kubenswrapper[4869]: I0314 09:53:55.704336 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:53:55 crc kubenswrapper[4869]: E0314 09:53:55.705589 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.192606 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558034-c6dwk"] Mar 14 09:54:00 crc kubenswrapper[4869]: E0314 09:54:00.193594 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451f8d82-2d10-4e49-9d47-b45773325a53" containerName="oc" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.193609 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="451f8d82-2d10-4e49-9d47-b45773325a53" containerName="oc" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.193891 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="451f8d82-2d10-4e49-9d47-b45773325a53" containerName="oc" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.194746 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558034-c6dwk" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.199826 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.199882 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.200031 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.210835 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558034-c6dwk"] Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.260229 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px9l4\" (UniqueName: \"kubernetes.io/projected/4c6f6c57-8eff-4c88-ad8d-a82f852aeb11-kube-api-access-px9l4\") pod \"auto-csr-approver-29558034-c6dwk\" (UID: \"4c6f6c57-8eff-4c88-ad8d-a82f852aeb11\") " pod="openshift-infra/auto-csr-approver-29558034-c6dwk" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.363970 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px9l4\" (UniqueName: \"kubernetes.io/projected/4c6f6c57-8eff-4c88-ad8d-a82f852aeb11-kube-api-access-px9l4\") pod \"auto-csr-approver-29558034-c6dwk\" (UID: \"4c6f6c57-8eff-4c88-ad8d-a82f852aeb11\") " pod="openshift-infra/auto-csr-approver-29558034-c6dwk" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.404455 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px9l4\" (UniqueName: \"kubernetes.io/projected/4c6f6c57-8eff-4c88-ad8d-a82f852aeb11-kube-api-access-px9l4\") pod \"auto-csr-approver-29558034-c6dwk\" (UID: \"4c6f6c57-8eff-4c88-ad8d-a82f852aeb11\") " 
pod="openshift-infra/auto-csr-approver-29558034-c6dwk" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.519563 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558034-c6dwk" Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.992662 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558034-c6dwk"] Mar 14 09:54:00 crc kubenswrapper[4869]: I0314 09:54:00.994796 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 09:54:01 crc kubenswrapper[4869]: I0314 09:54:01.807268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558034-c6dwk" event={"ID":"4c6f6c57-8eff-4c88-ad8d-a82f852aeb11","Type":"ContainerStarted","Data":"10244a61a8eeb5b4cd5dbe2ee46633607ef83f16212560622764c1dca595e751"} Mar 14 09:54:02 crc kubenswrapper[4869]: I0314 09:54:02.822116 4869 generic.go:334] "Generic (PLEG): container finished" podID="4c6f6c57-8eff-4c88-ad8d-a82f852aeb11" containerID="52ddfbb366d3fca98bd3c7abcc642112201e499a6a5c4590ca18d75bf514ad0a" exitCode=0 Mar 14 09:54:02 crc kubenswrapper[4869]: I0314 09:54:02.822321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558034-c6dwk" event={"ID":"4c6f6c57-8eff-4c88-ad8d-a82f852aeb11","Type":"ContainerDied","Data":"52ddfbb366d3fca98bd3c7abcc642112201e499a6a5c4590ca18d75bf514ad0a"} Mar 14 09:54:04 crc kubenswrapper[4869]: I0314 09:54:04.313706 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558034-c6dwk" Mar 14 09:54:04 crc kubenswrapper[4869]: I0314 09:54:04.449985 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px9l4\" (UniqueName: \"kubernetes.io/projected/4c6f6c57-8eff-4c88-ad8d-a82f852aeb11-kube-api-access-px9l4\") pod \"4c6f6c57-8eff-4c88-ad8d-a82f852aeb11\" (UID: \"4c6f6c57-8eff-4c88-ad8d-a82f852aeb11\") " Mar 14 09:54:04 crc kubenswrapper[4869]: I0314 09:54:04.457851 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6f6c57-8eff-4c88-ad8d-a82f852aeb11-kube-api-access-px9l4" (OuterVolumeSpecName: "kube-api-access-px9l4") pod "4c6f6c57-8eff-4c88-ad8d-a82f852aeb11" (UID: "4c6f6c57-8eff-4c88-ad8d-a82f852aeb11"). InnerVolumeSpecName "kube-api-access-px9l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:54:04 crc kubenswrapper[4869]: I0314 09:54:04.553329 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px9l4\" (UniqueName: \"kubernetes.io/projected/4c6f6c57-8eff-4c88-ad8d-a82f852aeb11-kube-api-access-px9l4\") on node \"crc\" DevicePath \"\"" Mar 14 09:54:04 crc kubenswrapper[4869]: I0314 09:54:04.840776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558034-c6dwk" event={"ID":"4c6f6c57-8eff-4c88-ad8d-a82f852aeb11","Type":"ContainerDied","Data":"10244a61a8eeb5b4cd5dbe2ee46633607ef83f16212560622764c1dca595e751"} Mar 14 09:54:04 crc kubenswrapper[4869]: I0314 09:54:04.841116 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10244a61a8eeb5b4cd5dbe2ee46633607ef83f16212560622764c1dca595e751" Mar 14 09:54:04 crc kubenswrapper[4869]: I0314 09:54:04.840874 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558034-c6dwk" Mar 14 09:54:05 crc kubenswrapper[4869]: I0314 09:54:05.412491 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558028-zrk96"] Mar 14 09:54:05 crc kubenswrapper[4869]: I0314 09:54:05.420604 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558028-zrk96"] Mar 14 09:54:05 crc kubenswrapper[4869]: I0314 09:54:05.703590 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:54:05 crc kubenswrapper[4869]: E0314 09:54:05.703822 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:54:05 crc kubenswrapper[4869]: I0314 09:54:05.716645 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71c6b9a-8ca6-434d-a48c-f269906d3ba8" path="/var/lib/kubelet/pods/f71c6b9a-8ca6-434d-a48c-f269906d3ba8/volumes" Mar 14 09:54:06 crc kubenswrapper[4869]: I0314 09:54:06.704146 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:54:06 crc kubenswrapper[4869]: E0314 09:54:06.704752 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:54:07 crc kubenswrapper[4869]: I0314 09:54:07.070687 4869 scope.go:117] "RemoveContainer" 
containerID="63d0a6010475880d8b05984e2d03ac5a5e7e8c920f0c9e007cdba5088dcca272" Mar 14 09:54:20 crc kubenswrapper[4869]: I0314 09:54:20.704736 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:54:20 crc kubenswrapper[4869]: E0314 09:54:20.706409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:54:21 crc kubenswrapper[4869]: I0314 09:54:21.705108 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:54:21 crc kubenswrapper[4869]: E0314 09:54:21.705354 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:54:32 crc kubenswrapper[4869]: I0314 09:54:32.704312 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:54:32 crc kubenswrapper[4869]: E0314 09:54:32.705322 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:54:33 crc kubenswrapper[4869]: I0314 09:54:33.704952 4869 scope.go:117] "RemoveContainer" 
containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:54:33 crc kubenswrapper[4869]: E0314 09:54:33.705583 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:54:46 crc kubenswrapper[4869]: I0314 09:54:46.704907 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:54:46 crc kubenswrapper[4869]: E0314 09:54:46.706142 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:54:47 crc kubenswrapper[4869]: I0314 09:54:47.709806 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:54:47 crc kubenswrapper[4869]: E0314 09:54:47.710259 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:55:01 crc kubenswrapper[4869]: I0314 09:55:01.703534 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:55:01 crc kubenswrapper[4869]: I0314 09:55:01.704248 4869 scope.go:117] "RemoveContainer" 
containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:55:01 crc kubenswrapper[4869]: E0314 09:55:01.704307 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:55:01 crc kubenswrapper[4869]: E0314 09:55:01.704703 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:55:09 crc kubenswrapper[4869]: I0314 09:55:09.605142 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:55:09 crc kubenswrapper[4869]: I0314 09:55:09.605729 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:55:15 crc kubenswrapper[4869]: I0314 09:55:15.704459 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:55:15 crc kubenswrapper[4869]: E0314 09:55:15.705081 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:55:16 crc kubenswrapper[4869]: I0314 09:55:16.703881 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:55:16 crc kubenswrapper[4869]: E0314 09:55:16.704146 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:55:29 crc kubenswrapper[4869]: I0314 09:55:29.704880 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:55:29 crc kubenswrapper[4869]: E0314 09:55:29.706172 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:55:29 crc kubenswrapper[4869]: I0314 09:55:29.706462 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:55:29 crc kubenswrapper[4869]: E0314 09:55:29.706705 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" 
podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:55:30 crc kubenswrapper[4869]: I0314 09:55:30.875686 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vkq4k"] Mar 14 09:55:30 crc kubenswrapper[4869]: E0314 09:55:30.876231 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6f6c57-8eff-4c88-ad8d-a82f852aeb11" containerName="oc" Mar 14 09:55:30 crc kubenswrapper[4869]: I0314 09:55:30.876250 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6f6c57-8eff-4c88-ad8d-a82f852aeb11" containerName="oc" Mar 14 09:55:30 crc kubenswrapper[4869]: I0314 09:55:30.876476 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6f6c57-8eff-4c88-ad8d-a82f852aeb11" containerName="oc" Mar 14 09:55:30 crc kubenswrapper[4869]: I0314 09:55:30.891889 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkq4k"] Mar 14 09:55:30 crc kubenswrapper[4869]: I0314 09:55:30.892057 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:30 crc kubenswrapper[4869]: I0314 09:55:30.929790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbxx\" (UniqueName: \"kubernetes.io/projected/341bcbb3-4368-494e-bef7-4b5efd630ae4-kube-api-access-dlbxx\") pod \"certified-operators-vkq4k\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:30 crc kubenswrapper[4869]: I0314 09:55:30.929905 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-utilities\") pod \"certified-operators-vkq4k\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:30 crc kubenswrapper[4869]: I0314 09:55:30.930048 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-catalog-content\") pod \"certified-operators-vkq4k\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:31 crc kubenswrapper[4869]: I0314 09:55:31.032703 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlbxx\" (UniqueName: \"kubernetes.io/projected/341bcbb3-4368-494e-bef7-4b5efd630ae4-kube-api-access-dlbxx\") pod \"certified-operators-vkq4k\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:31 crc kubenswrapper[4869]: I0314 09:55:31.032801 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-utilities\") pod 
\"certified-operators-vkq4k\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:31 crc kubenswrapper[4869]: I0314 09:55:31.032884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-catalog-content\") pod \"certified-operators-vkq4k\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:31 crc kubenswrapper[4869]: I0314 09:55:31.033344 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-utilities\") pod \"certified-operators-vkq4k\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:31 crc kubenswrapper[4869]: I0314 09:55:31.033501 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-catalog-content\") pod \"certified-operators-vkq4k\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:31 crc kubenswrapper[4869]: I0314 09:55:31.059480 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlbxx\" (UniqueName: \"kubernetes.io/projected/341bcbb3-4368-494e-bef7-4b5efd630ae4-kube-api-access-dlbxx\") pod \"certified-operators-vkq4k\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:31 crc kubenswrapper[4869]: I0314 09:55:31.218222 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:31 crc kubenswrapper[4869]: I0314 09:55:31.832071 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkq4k"] Mar 14 09:55:32 crc kubenswrapper[4869]: I0314 09:55:32.712205 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkq4k" event={"ID":"341bcbb3-4368-494e-bef7-4b5efd630ae4","Type":"ContainerDied","Data":"71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6"} Mar 14 09:55:32 crc kubenswrapper[4869]: I0314 09:55:32.711846 4869 generic.go:334] "Generic (PLEG): container finished" podID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerID="71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6" exitCode=0 Mar 14 09:55:32 crc kubenswrapper[4869]: I0314 09:55:32.712563 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkq4k" event={"ID":"341bcbb3-4368-494e-bef7-4b5efd630ae4","Type":"ContainerStarted","Data":"0ee0e736b39ebde547d4dd7e81574024e74b8e677b1c20fd3f1dc7bed6cdea5c"} Mar 14 09:55:33 crc kubenswrapper[4869]: I0314 09:55:33.727303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkq4k" event={"ID":"341bcbb3-4368-494e-bef7-4b5efd630ae4","Type":"ContainerStarted","Data":"714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b"} Mar 14 09:55:34 crc kubenswrapper[4869]: I0314 09:55:34.741153 4869 generic.go:334] "Generic (PLEG): container finished" podID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerID="714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b" exitCode=0 Mar 14 09:55:34 crc kubenswrapper[4869]: I0314 09:55:34.741201 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkq4k" 
event={"ID":"341bcbb3-4368-494e-bef7-4b5efd630ae4","Type":"ContainerDied","Data":"714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b"} Mar 14 09:55:35 crc kubenswrapper[4869]: I0314 09:55:35.754548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkq4k" event={"ID":"341bcbb3-4368-494e-bef7-4b5efd630ae4","Type":"ContainerStarted","Data":"e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198"} Mar 14 09:55:35 crc kubenswrapper[4869]: I0314 09:55:35.783289 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vkq4k" podStartSLOduration=3.288129788 podStartE2EDuration="5.783267821s" podCreationTimestamp="2026-03-14 09:55:30 +0000 UTC" firstStartedPulling="2026-03-14 09:55:32.713556075 +0000 UTC m=+3485.685838128" lastFinishedPulling="2026-03-14 09:55:35.208694068 +0000 UTC m=+3488.180976161" observedRunningTime="2026-03-14 09:55:35.777981631 +0000 UTC m=+3488.750263684" watchObservedRunningTime="2026-03-14 09:55:35.783267821 +0000 UTC m=+3488.755549874" Mar 14 09:55:39 crc kubenswrapper[4869]: I0314 09:55:39.605533 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:55:39 crc kubenswrapper[4869]: I0314 09:55:39.605884 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:55:41 crc kubenswrapper[4869]: I0314 09:55:41.219087 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:41 crc kubenswrapper[4869]: I0314 09:55:41.219458 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:41 crc kubenswrapper[4869]: I0314 09:55:41.272917 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:41 crc kubenswrapper[4869]: I0314 09:55:41.705222 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:55:41 crc kubenswrapper[4869]: E0314 09:55:41.705648 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:55:41 crc kubenswrapper[4869]: I0314 09:55:41.886697 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:41 crc kubenswrapper[4869]: I0314 09:55:41.954635 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkq4k"] Mar 14 09:55:43 crc kubenswrapper[4869]: I0314 09:55:43.848981 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vkq4k" podUID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerName="registry-server" containerID="cri-o://e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198" gracePeriod=2 Mar 14 09:55:43 crc kubenswrapper[4869]: I0314 09:55:43.940748 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zl4mt"] Mar 14 09:55:43 crc kubenswrapper[4869]: I0314 09:55:43.944325 4869 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:43 crc kubenswrapper[4869]: I0314 09:55:43.970280 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zl4mt"] Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.138958 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-catalog-content\") pod \"redhat-operators-zl4mt\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.139253 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lrpr\" (UniqueName: \"kubernetes.io/projected/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-kube-api-access-4lrpr\") pod \"redhat-operators-zl4mt\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.139611 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-utilities\") pod \"redhat-operators-zl4mt\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.241469 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-catalog-content\") pod \"redhat-operators-zl4mt\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.241536 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lrpr\" (UniqueName: \"kubernetes.io/projected/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-kube-api-access-4lrpr\") pod \"redhat-operators-zl4mt\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.241643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-utilities\") pod \"redhat-operators-zl4mt\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.242083 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-utilities\") pod \"redhat-operators-zl4mt\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.242306 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-catalog-content\") pod \"redhat-operators-zl4mt\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.264706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lrpr\" (UniqueName: \"kubernetes.io/projected/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-kube-api-access-4lrpr\") pod \"redhat-operators-zl4mt\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.344961 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.373608 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.445640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-catalog-content\") pod \"341bcbb3-4368-494e-bef7-4b5efd630ae4\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.445691 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlbxx\" (UniqueName: \"kubernetes.io/projected/341bcbb3-4368-494e-bef7-4b5efd630ae4-kube-api-access-dlbxx\") pod \"341bcbb3-4368-494e-bef7-4b5efd630ae4\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.445810 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-utilities\") pod \"341bcbb3-4368-494e-bef7-4b5efd630ae4\" (UID: \"341bcbb3-4368-494e-bef7-4b5efd630ae4\") " Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.446977 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-utilities" (OuterVolumeSpecName: "utilities") pod "341bcbb3-4368-494e-bef7-4b5efd630ae4" (UID: "341bcbb3-4368-494e-bef7-4b5efd630ae4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.450148 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341bcbb3-4368-494e-bef7-4b5efd630ae4-kube-api-access-dlbxx" (OuterVolumeSpecName: "kube-api-access-dlbxx") pod "341bcbb3-4368-494e-bef7-4b5efd630ae4" (UID: "341bcbb3-4368-494e-bef7-4b5efd630ae4"). InnerVolumeSpecName "kube-api-access-dlbxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.511369 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "341bcbb3-4368-494e-bef7-4b5efd630ae4" (UID: "341bcbb3-4368-494e-bef7-4b5efd630ae4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.548628 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.548670 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/341bcbb3-4368-494e-bef7-4b5efd630ae4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.548682 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlbxx\" (UniqueName: \"kubernetes.io/projected/341bcbb3-4368-494e-bef7-4b5efd630ae4-kube-api-access-dlbxx\") on node \"crc\" DevicePath \"\"" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.704045 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:55:44 crc kubenswrapper[4869]: E0314 
09:55:44.704251 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.865267 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zl4mt"] Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.873557 4869 generic.go:334] "Generic (PLEG): container finished" podID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerID="e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198" exitCode=0 Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.873602 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkq4k" event={"ID":"341bcbb3-4368-494e-bef7-4b5efd630ae4","Type":"ContainerDied","Data":"e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198"} Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.873632 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkq4k" event={"ID":"341bcbb3-4368-494e-bef7-4b5efd630ae4","Type":"ContainerDied","Data":"0ee0e736b39ebde547d4dd7e81574024e74b8e677b1c20fd3f1dc7bed6cdea5c"} Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.873654 4869 scope.go:117] "RemoveContainer" containerID="e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.873795 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkq4k" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.917453 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkq4k"] Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.919858 4869 scope.go:117] "RemoveContainer" containerID="714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.928016 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vkq4k"] Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.959400 4869 scope.go:117] "RemoveContainer" containerID="71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.986351 4869 scope.go:117] "RemoveContainer" containerID="e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198" Mar 14 09:55:44 crc kubenswrapper[4869]: E0314 09:55:44.986913 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198\": container with ID starting with e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198 not found: ID does not exist" containerID="e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.986963 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198"} err="failed to get container status \"e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198\": rpc error: code = NotFound desc = could not find container \"e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198\": container with ID starting with e098312f66b2295d1aa50f71bcbe8f44d1861b161c2a8817ebde4462f04cd198 not 
found: ID does not exist" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.986997 4869 scope.go:117] "RemoveContainer" containerID="714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b" Mar 14 09:55:44 crc kubenswrapper[4869]: E0314 09:55:44.987480 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b\": container with ID starting with 714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b not found: ID does not exist" containerID="714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.987546 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b"} err="failed to get container status \"714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b\": rpc error: code = NotFound desc = could not find container \"714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b\": container with ID starting with 714e4e260cb85fd30de123bde6a9ec014d497322e50f1edbb81d6160dd4ec58b not found: ID does not exist" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.987576 4869 scope.go:117] "RemoveContainer" containerID="71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6" Mar 14 09:55:44 crc kubenswrapper[4869]: E0314 09:55:44.991923 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6\": container with ID starting with 71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6 not found: ID does not exist" containerID="71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6" Mar 14 09:55:44 crc kubenswrapper[4869]: I0314 09:55:44.991968 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6"} err="failed to get container status \"71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6\": rpc error: code = NotFound desc = could not find container \"71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6\": container with ID starting with 71459be1fd00e1981907bec30001394a74e34b672d40e6ff10d71cf9ce9f06b6 not found: ID does not exist" Mar 14 09:55:45 crc kubenswrapper[4869]: I0314 09:55:45.717214 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341bcbb3-4368-494e-bef7-4b5efd630ae4" path="/var/lib/kubelet/pods/341bcbb3-4368-494e-bef7-4b5efd630ae4/volumes" Mar 14 09:55:45 crc kubenswrapper[4869]: I0314 09:55:45.882440 4869 generic.go:334] "Generic (PLEG): container finished" podID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerID="5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362" exitCode=0 Mar 14 09:55:45 crc kubenswrapper[4869]: I0314 09:55:45.882527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl4mt" event={"ID":"5f6be2a8-5d5c-4903-8e30-0dc64017bb72","Type":"ContainerDied","Data":"5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362"} Mar 14 09:55:45 crc kubenswrapper[4869]: I0314 09:55:45.882562 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl4mt" event={"ID":"5f6be2a8-5d5c-4903-8e30-0dc64017bb72","Type":"ContainerStarted","Data":"44e5aef7ac0724be66c7db22cf52d99262b6f7a9e77698b20cdf59ba516180c9"} Mar 14 09:55:46 crc kubenswrapper[4869]: I0314 09:55:46.894129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl4mt" event={"ID":"5f6be2a8-5d5c-4903-8e30-0dc64017bb72","Type":"ContainerStarted","Data":"c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965"} Mar 14 
09:55:47 crc kubenswrapper[4869]: I0314 09:55:47.903597 4869 generic.go:334] "Generic (PLEG): container finished" podID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerID="c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965" exitCode=0 Mar 14 09:55:47 crc kubenswrapper[4869]: I0314 09:55:47.903677 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl4mt" event={"ID":"5f6be2a8-5d5c-4903-8e30-0dc64017bb72","Type":"ContainerDied","Data":"c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965"} Mar 14 09:55:48 crc kubenswrapper[4869]: I0314 09:55:48.921760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl4mt" event={"ID":"5f6be2a8-5d5c-4903-8e30-0dc64017bb72","Type":"ContainerStarted","Data":"e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92"} Mar 14 09:55:48 crc kubenswrapper[4869]: I0314 09:55:48.948061 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zl4mt" podStartSLOduration=3.50188978 podStartE2EDuration="5.948043389s" podCreationTimestamp="2026-03-14 09:55:43 +0000 UTC" firstStartedPulling="2026-03-14 09:55:45.884404712 +0000 UTC m=+3498.856686775" lastFinishedPulling="2026-03-14 09:55:48.330558331 +0000 UTC m=+3501.302840384" observedRunningTime="2026-03-14 09:55:48.937561611 +0000 UTC m=+3501.909843684" watchObservedRunningTime="2026-03-14 09:55:48.948043389 +0000 UTC m=+3501.920325442" Mar 14 09:55:54 crc kubenswrapper[4869]: I0314 09:55:54.373966 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:54 crc kubenswrapper[4869]: I0314 09:55:54.376264 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:55:54 crc kubenswrapper[4869]: I0314 09:55:54.703472 4869 scope.go:117] "RemoveContainer" 
containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:55:54 crc kubenswrapper[4869]: E0314 09:55:54.704013 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:55:55 crc kubenswrapper[4869]: I0314 09:55:55.425959 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zl4mt" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerName="registry-server" probeResult="failure" output=< Mar 14 09:55:55 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 09:55:55 crc kubenswrapper[4869]: > Mar 14 09:55:56 crc kubenswrapper[4869]: I0314 09:55:56.704355 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:55:56 crc kubenswrapper[4869]: E0314 09:55:56.706149 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.151697 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558036-mcscc"] Mar 14 09:56:00 crc kubenswrapper[4869]: E0314 09:56:00.152490 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerName="registry-server" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.152523 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerName="registry-server" Mar 14 09:56:00 crc kubenswrapper[4869]: E0314 09:56:00.152536 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerName="extract-utilities" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.152545 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerName="extract-utilities" Mar 14 09:56:00 crc kubenswrapper[4869]: E0314 09:56:00.152558 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerName="extract-content" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.152566 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerName="extract-content" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.152800 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="341bcbb3-4368-494e-bef7-4b5efd630ae4" containerName="registry-server" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.153587 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558036-mcscc" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.155904 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.157212 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.157377 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.161529 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558036-mcscc"] Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.333249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfmpc\" (UniqueName: \"kubernetes.io/projected/313d2a07-63ff-47b7-9fda-0f7217fb33a7-kube-api-access-bfmpc\") pod \"auto-csr-approver-29558036-mcscc\" (UID: \"313d2a07-63ff-47b7-9fda-0f7217fb33a7\") " pod="openshift-infra/auto-csr-approver-29558036-mcscc" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.434953 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfmpc\" (UniqueName: \"kubernetes.io/projected/313d2a07-63ff-47b7-9fda-0f7217fb33a7-kube-api-access-bfmpc\") pod \"auto-csr-approver-29558036-mcscc\" (UID: \"313d2a07-63ff-47b7-9fda-0f7217fb33a7\") " pod="openshift-infra/auto-csr-approver-29558036-mcscc" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.467481 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfmpc\" (UniqueName: \"kubernetes.io/projected/313d2a07-63ff-47b7-9fda-0f7217fb33a7-kube-api-access-bfmpc\") pod \"auto-csr-approver-29558036-mcscc\" (UID: \"313d2a07-63ff-47b7-9fda-0f7217fb33a7\") " 
pod="openshift-infra/auto-csr-approver-29558036-mcscc" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.473759 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558036-mcscc" Mar 14 09:56:00 crc kubenswrapper[4869]: I0314 09:56:00.986652 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558036-mcscc"] Mar 14 09:56:01 crc kubenswrapper[4869]: I0314 09:56:01.020728 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558036-mcscc" event={"ID":"313d2a07-63ff-47b7-9fda-0f7217fb33a7","Type":"ContainerStarted","Data":"ffe1ef3fcf4a136eaf581dd57cca562534b159dd06f88926529c192fc00614a6"} Mar 14 09:56:03 crc kubenswrapper[4869]: I0314 09:56:03.043073 4869 generic.go:334] "Generic (PLEG): container finished" podID="313d2a07-63ff-47b7-9fda-0f7217fb33a7" containerID="849d18ee71153524147d205d968e16d57d57bfc865aca7bed0e5ffb3b6a46044" exitCode=0 Mar 14 09:56:03 crc kubenswrapper[4869]: I0314 09:56:03.043156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558036-mcscc" event={"ID":"313d2a07-63ff-47b7-9fda-0f7217fb33a7","Type":"ContainerDied","Data":"849d18ee71153524147d205d968e16d57d57bfc865aca7bed0e5ffb3b6a46044"} Mar 14 09:56:04 crc kubenswrapper[4869]: I0314 09:56:04.406683 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558036-mcscc" Mar 14 09:56:04 crc kubenswrapper[4869]: I0314 09:56:04.435700 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:56:04 crc kubenswrapper[4869]: I0314 09:56:04.494700 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:56:04 crc kubenswrapper[4869]: I0314 09:56:04.525728 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfmpc\" (UniqueName: \"kubernetes.io/projected/313d2a07-63ff-47b7-9fda-0f7217fb33a7-kube-api-access-bfmpc\") pod \"313d2a07-63ff-47b7-9fda-0f7217fb33a7\" (UID: \"313d2a07-63ff-47b7-9fda-0f7217fb33a7\") " Mar 14 09:56:04 crc kubenswrapper[4869]: I0314 09:56:04.532196 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313d2a07-63ff-47b7-9fda-0f7217fb33a7-kube-api-access-bfmpc" (OuterVolumeSpecName: "kube-api-access-bfmpc") pod "313d2a07-63ff-47b7-9fda-0f7217fb33a7" (UID: "313d2a07-63ff-47b7-9fda-0f7217fb33a7"). InnerVolumeSpecName "kube-api-access-bfmpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:56:04 crc kubenswrapper[4869]: I0314 09:56:04.629123 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfmpc\" (UniqueName: \"kubernetes.io/projected/313d2a07-63ff-47b7-9fda-0f7217fb33a7-kube-api-access-bfmpc\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:04 crc kubenswrapper[4869]: I0314 09:56:04.680101 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zl4mt"] Mar 14 09:56:05 crc kubenswrapper[4869]: I0314 09:56:05.069202 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558036-mcscc" event={"ID":"313d2a07-63ff-47b7-9fda-0f7217fb33a7","Type":"ContainerDied","Data":"ffe1ef3fcf4a136eaf581dd57cca562534b159dd06f88926529c192fc00614a6"} Mar 14 09:56:05 crc kubenswrapper[4869]: I0314 09:56:05.069625 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffe1ef3fcf4a136eaf581dd57cca562534b159dd06f88926529c192fc00614a6" Mar 14 09:56:05 crc kubenswrapper[4869]: I0314 09:56:05.069261 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558036-mcscc" Mar 14 09:56:05 crc kubenswrapper[4869]: I0314 09:56:05.489059 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558030-dgn88"] Mar 14 09:56:05 crc kubenswrapper[4869]: I0314 09:56:05.502097 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558030-dgn88"] Mar 14 09:56:05 crc kubenswrapper[4869]: I0314 09:56:05.717492 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf0393f-6ef4-4bf9-8d30-33da4902cf9f" path="/var/lib/kubelet/pods/7cf0393f-6ef4-4bf9-8d30-33da4902cf9f/volumes" Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.081253 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zl4mt" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerName="registry-server" containerID="cri-o://e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92" gracePeriod=2 Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.594683 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.776495 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lrpr\" (UniqueName: \"kubernetes.io/projected/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-kube-api-access-4lrpr\") pod \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.776719 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-catalog-content\") pod \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.776904 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-utilities\") pod \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\" (UID: \"5f6be2a8-5d5c-4903-8e30-0dc64017bb72\") " Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.777823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-utilities" (OuterVolumeSpecName: "utilities") pod "5f6be2a8-5d5c-4903-8e30-0dc64017bb72" (UID: "5f6be2a8-5d5c-4903-8e30-0dc64017bb72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.784051 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-kube-api-access-4lrpr" (OuterVolumeSpecName: "kube-api-access-4lrpr") pod "5f6be2a8-5d5c-4903-8e30-0dc64017bb72" (UID: "5f6be2a8-5d5c-4903-8e30-0dc64017bb72"). InnerVolumeSpecName "kube-api-access-4lrpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.880256 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lrpr\" (UniqueName: \"kubernetes.io/projected/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-kube-api-access-4lrpr\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.880291 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.951932 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f6be2a8-5d5c-4903-8e30-0dc64017bb72" (UID: "5f6be2a8-5d5c-4903-8e30-0dc64017bb72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:56:06 crc kubenswrapper[4869]: I0314 09:56:06.982825 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6be2a8-5d5c-4903-8e30-0dc64017bb72-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.094339 4869 generic.go:334] "Generic (PLEG): container finished" podID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerID="e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92" exitCode=0 Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.094501 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl4mt" event={"ID":"5f6be2a8-5d5c-4903-8e30-0dc64017bb72","Type":"ContainerDied","Data":"e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92"} Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.094792 4869 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-zl4mt" event={"ID":"5f6be2a8-5d5c-4903-8e30-0dc64017bb72","Type":"ContainerDied","Data":"44e5aef7ac0724be66c7db22cf52d99262b6f7a9e77698b20cdf59ba516180c9"} Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.094822 4869 scope.go:117] "RemoveContainer" containerID="e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.094570 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zl4mt" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.132889 4869 scope.go:117] "RemoveContainer" containerID="c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.136967 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zl4mt"] Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.149159 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zl4mt"] Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.166257 4869 scope.go:117] "RemoveContainer" containerID="5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.176608 4869 scope.go:117] "RemoveContainer" containerID="7d053bb427ad60ee8541e2e8e4447d4e5c3cb6e8aead438f03964f69bf13021e" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.203639 4869 scope.go:117] "RemoveContainer" containerID="e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92" Mar 14 09:56:07 crc kubenswrapper[4869]: E0314 09:56:07.204110 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92\": container with ID starting with e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92 not 
found: ID does not exist" containerID="e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.204168 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92"} err="failed to get container status \"e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92\": rpc error: code = NotFound desc = could not find container \"e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92\": container with ID starting with e785b63b7f8db80c4cbd9078d923b7c2c5b8c21bcda51fde97a607b588810e92 not found: ID does not exist" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.204201 4869 scope.go:117] "RemoveContainer" containerID="c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965" Mar 14 09:56:07 crc kubenswrapper[4869]: E0314 09:56:07.204481 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965\": container with ID starting with c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965 not found: ID does not exist" containerID="c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.204503 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965"} err="failed to get container status \"c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965\": rpc error: code = NotFound desc = could not find container \"c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965\": container with ID starting with c55e3098688e323687f5fe35c4cb79f2cdc3cd67bd322470abd8a8012b65d965 not found: ID does not exist" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.204537 
4869 scope.go:117] "RemoveContainer" containerID="5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362" Mar 14 09:56:07 crc kubenswrapper[4869]: E0314 09:56:07.204775 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362\": container with ID starting with 5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362 not found: ID does not exist" containerID="5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.204833 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362"} err="failed to get container status \"5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362\": rpc error: code = NotFound desc = could not find container \"5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362\": container with ID starting with 5cb815d5d24b48165b0ed63f3c26e9df842b1e54cb98bb8d1a360c64c4018362 not found: ID does not exist" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.711859 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:56:07 crc kubenswrapper[4869]: E0314 09:56:07.712629 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:56:07 crc kubenswrapper[4869]: I0314 09:56:07.726166 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" 
path="/var/lib/kubelet/pods/5f6be2a8-5d5c-4903-8e30-0dc64017bb72/volumes" Mar 14 09:56:09 crc kubenswrapper[4869]: I0314 09:56:09.605913 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:56:09 crc kubenswrapper[4869]: I0314 09:56:09.606308 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:56:09 crc kubenswrapper[4869]: I0314 09:56:09.606412 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:56:09 crc kubenswrapper[4869]: I0314 09:56:09.607681 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67876cd828f4b777d28fcf869a354af256d2cf26d5306f0de4c3d4644fecdd2a"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:56:09 crc kubenswrapper[4869]: I0314 09:56:09.607826 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://67876cd828f4b777d28fcf869a354af256d2cf26d5306f0de4c3d4644fecdd2a" gracePeriod=600 Mar 14 09:56:09 crc kubenswrapper[4869]: I0314 09:56:09.704358 4869 scope.go:117] "RemoveContainer" 
containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:56:09 crc kubenswrapper[4869]: E0314 09:56:09.704811 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:56:10 crc kubenswrapper[4869]: I0314 09:56:10.141171 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="67876cd828f4b777d28fcf869a354af256d2cf26d5306f0de4c3d4644fecdd2a" exitCode=0 Mar 14 09:56:10 crc kubenswrapper[4869]: I0314 09:56:10.141273 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"67876cd828f4b777d28fcf869a354af256d2cf26d5306f0de4c3d4644fecdd2a"} Mar 14 09:56:10 crc kubenswrapper[4869]: I0314 09:56:10.141636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738"} Mar 14 09:56:10 crc kubenswrapper[4869]: I0314 09:56:10.141664 4869 scope.go:117] "RemoveContainer" containerID="fb96f24f8c3efd26a8dff0e1c5abbb5b026f1e93d6f6e03e2dd5149d8138730e" Mar 14 09:56:21 crc kubenswrapper[4869]: I0314 09:56:21.704433 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:56:21 crc kubenswrapper[4869]: E0314 09:56:21.705429 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:56:23 crc kubenswrapper[4869]: I0314 09:56:23.704775 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:56:23 crc kubenswrapper[4869]: E0314 09:56:23.705044 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:56:34 crc kubenswrapper[4869]: I0314 09:56:34.703883 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:56:34 crc kubenswrapper[4869]: E0314 09:56:34.704788 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:56:38 crc kubenswrapper[4869]: I0314 09:56:38.703801 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:56:38 crc kubenswrapper[4869]: E0314 09:56:38.704591 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:56:49 crc kubenswrapper[4869]: 
I0314 09:56:49.704316 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:56:49 crc kubenswrapper[4869]: E0314 09:56:49.705243 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:56:51 crc kubenswrapper[4869]: I0314 09:56:51.704254 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:56:51 crc kubenswrapper[4869]: E0314 09:56:51.704802 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:57:02 crc kubenswrapper[4869]: I0314 09:57:02.704839 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:57:02 crc kubenswrapper[4869]: E0314 09:57:02.705768 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:57:06 crc kubenswrapper[4869]: I0314 09:57:06.703535 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:57:06 crc kubenswrapper[4869]: E0314 09:57:06.704288 4869 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:57:13 crc kubenswrapper[4869]: I0314 09:57:13.704653 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:57:13 crc kubenswrapper[4869]: E0314 09:57:13.705539 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:57:20 crc kubenswrapper[4869]: I0314 09:57:20.705984 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:57:20 crc kubenswrapper[4869]: E0314 09:57:20.706959 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:57:28 crc kubenswrapper[4869]: I0314 09:57:28.703676 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:57:28 crc kubenswrapper[4869]: E0314 09:57:28.704359 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:57:33 crc kubenswrapper[4869]: I0314 09:57:33.704825 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:57:33 crc kubenswrapper[4869]: E0314 09:57:33.716628 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:57:40 crc kubenswrapper[4869]: I0314 09:57:40.704272 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:57:40 crc kubenswrapper[4869]: E0314 09:57:40.705288 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:57:46 crc kubenswrapper[4869]: I0314 09:57:46.704281 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:57:47 crc kubenswrapper[4869]: I0314 09:57:47.108391 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2"} Mar 14 09:57:54 crc kubenswrapper[4869]: I0314 09:57:54.539411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:57:54 crc kubenswrapper[4869]: I0314 09:57:54.540176 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:57:54 crc kubenswrapper[4869]: I0314 09:57:54.703725 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:57:54 crc kubenswrapper[4869]: E0314 09:57:54.703964 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:57:55 crc kubenswrapper[4869]: I0314 09:57:55.188205 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" exitCode=1 Mar 14 09:57:55 crc kubenswrapper[4869]: I0314 09:57:55.188268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2"} Mar 14 09:57:55 crc kubenswrapper[4869]: I0314 09:57:55.188320 4869 scope.go:117] "RemoveContainer" containerID="11f9040fd76095988cc4bd5df2dce8d5b46bd2a92eaaea51fc1f88025131a894" Mar 14 09:57:55 crc kubenswrapper[4869]: I0314 09:57:55.189140 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:57:55 crc kubenswrapper[4869]: E0314 09:57:55.189417 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.151876 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558038-dmq4r"] Mar 14 09:58:00 crc kubenswrapper[4869]: E0314 09:58:00.152943 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerName="registry-server" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.152958 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerName="registry-server" Mar 14 09:58:00 crc kubenswrapper[4869]: E0314 09:58:00.152971 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313d2a07-63ff-47b7-9fda-0f7217fb33a7" containerName="oc" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.152977 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="313d2a07-63ff-47b7-9fda-0f7217fb33a7" containerName="oc" Mar 14 09:58:00 crc kubenswrapper[4869]: E0314 09:58:00.152995 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerName="extract-content" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.153001 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerName="extract-content" Mar 14 09:58:00 crc kubenswrapper[4869]: E0314 09:58:00.153035 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerName="extract-utilities" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.153042 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerName="extract-utilities" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.153226 4869 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5f6be2a8-5d5c-4903-8e30-0dc64017bb72" containerName="registry-server" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.153241 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="313d2a07-63ff-47b7-9fda-0f7217fb33a7" containerName="oc" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.154022 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558038-dmq4r" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.161456 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558038-dmq4r"] Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.162099 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.162314 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.162474 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.310903 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r79dc\" (UniqueName: \"kubernetes.io/projected/578c26ed-02c8-47f5-9e36-151cf98c6537-kube-api-access-r79dc\") pod \"auto-csr-approver-29558038-dmq4r\" (UID: \"578c26ed-02c8-47f5-9e36-151cf98c6537\") " pod="openshift-infra/auto-csr-approver-29558038-dmq4r" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.413388 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r79dc\" (UniqueName: \"kubernetes.io/projected/578c26ed-02c8-47f5-9e36-151cf98c6537-kube-api-access-r79dc\") pod \"auto-csr-approver-29558038-dmq4r\" (UID: \"578c26ed-02c8-47f5-9e36-151cf98c6537\") " 
pod="openshift-infra/auto-csr-approver-29558038-dmq4r" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.435951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r79dc\" (UniqueName: \"kubernetes.io/projected/578c26ed-02c8-47f5-9e36-151cf98c6537-kube-api-access-r79dc\") pod \"auto-csr-approver-29558038-dmq4r\" (UID: \"578c26ed-02c8-47f5-9e36-151cf98c6537\") " pod="openshift-infra/auto-csr-approver-29558038-dmq4r" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.476037 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558038-dmq4r" Mar 14 09:58:00 crc kubenswrapper[4869]: I0314 09:58:00.969055 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558038-dmq4r"] Mar 14 09:58:01 crc kubenswrapper[4869]: I0314 09:58:01.249416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558038-dmq4r" event={"ID":"578c26ed-02c8-47f5-9e36-151cf98c6537","Type":"ContainerStarted","Data":"8e2b7269c19851059b1b52eb96892959d96f67cacd19739cae169d0110665611"} Mar 14 09:58:02 crc kubenswrapper[4869]: I0314 09:58:02.259678 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558038-dmq4r" event={"ID":"578c26ed-02c8-47f5-9e36-151cf98c6537","Type":"ContainerStarted","Data":"ad071eafd7933ab3e925a7f63ff49a7a8b58510926ad5766e20f8377b85e5c4a"} Mar 14 09:58:02 crc kubenswrapper[4869]: I0314 09:58:02.284775 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558038-dmq4r" podStartSLOduration=1.498723703 podStartE2EDuration="2.284751521s" podCreationTimestamp="2026-03-14 09:58:00 +0000 UTC" firstStartedPulling="2026-03-14 09:58:00.962862357 +0000 UTC m=+3633.935144410" lastFinishedPulling="2026-03-14 09:58:01.748890175 +0000 UTC m=+3634.721172228" observedRunningTime="2026-03-14 
09:58:02.275412332 +0000 UTC m=+3635.247694405" watchObservedRunningTime="2026-03-14 09:58:02.284751521 +0000 UTC m=+3635.257033574" Mar 14 09:58:03 crc kubenswrapper[4869]: I0314 09:58:03.270364 4869 generic.go:334] "Generic (PLEG): container finished" podID="578c26ed-02c8-47f5-9e36-151cf98c6537" containerID="ad071eafd7933ab3e925a7f63ff49a7a8b58510926ad5766e20f8377b85e5c4a" exitCode=0 Mar 14 09:58:03 crc kubenswrapper[4869]: I0314 09:58:03.270460 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558038-dmq4r" event={"ID":"578c26ed-02c8-47f5-9e36-151cf98c6537","Type":"ContainerDied","Data":"ad071eafd7933ab3e925a7f63ff49a7a8b58510926ad5766e20f8377b85e5c4a"} Mar 14 09:58:04 crc kubenswrapper[4869]: I0314 09:58:04.539311 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:58:04 crc kubenswrapper[4869]: I0314 09:58:04.539359 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 09:58:04 crc kubenswrapper[4869]: I0314 09:58:04.540040 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:58:04 crc kubenswrapper[4869]: E0314 09:58:04.540302 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:58:04 crc kubenswrapper[4869]: I0314 09:58:04.698654 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558038-dmq4r" Mar 14 09:58:04 crc kubenswrapper[4869]: I0314 09:58:04.803874 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r79dc\" (UniqueName: \"kubernetes.io/projected/578c26ed-02c8-47f5-9e36-151cf98c6537-kube-api-access-r79dc\") pod \"578c26ed-02c8-47f5-9e36-151cf98c6537\" (UID: \"578c26ed-02c8-47f5-9e36-151cf98c6537\") " Mar 14 09:58:04 crc kubenswrapper[4869]: I0314 09:58:04.808997 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578c26ed-02c8-47f5-9e36-151cf98c6537-kube-api-access-r79dc" (OuterVolumeSpecName: "kube-api-access-r79dc") pod "578c26ed-02c8-47f5-9e36-151cf98c6537" (UID: "578c26ed-02c8-47f5-9e36-151cf98c6537"). InnerVolumeSpecName "kube-api-access-r79dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:58:04 crc kubenswrapper[4869]: I0314 09:58:04.907084 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r79dc\" (UniqueName: \"kubernetes.io/projected/578c26ed-02c8-47f5-9e36-151cf98c6537-kube-api-access-r79dc\") on node \"crc\" DevicePath \"\"" Mar 14 09:58:05 crc kubenswrapper[4869]: I0314 09:58:05.290411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558038-dmq4r" event={"ID":"578c26ed-02c8-47f5-9e36-151cf98c6537","Type":"ContainerDied","Data":"8e2b7269c19851059b1b52eb96892959d96f67cacd19739cae169d0110665611"} Mar 14 09:58:05 crc kubenswrapper[4869]: I0314 09:58:05.290457 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e2b7269c19851059b1b52eb96892959d96f67cacd19739cae169d0110665611" Mar 14 09:58:05 crc kubenswrapper[4869]: I0314 09:58:05.290526 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558038-dmq4r" Mar 14 09:58:05 crc kubenswrapper[4869]: I0314 09:58:05.359234 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558032-85cfb"] Mar 14 09:58:05 crc kubenswrapper[4869]: I0314 09:58:05.366785 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558032-85cfb"] Mar 14 09:58:05 crc kubenswrapper[4869]: I0314 09:58:05.718256 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="451f8d82-2d10-4e49-9d47-b45773325a53" path="/var/lib/kubelet/pods/451f8d82-2d10-4e49-9d47-b45773325a53/volumes" Mar 14 09:58:06 crc kubenswrapper[4869]: I0314 09:58:06.704324 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:58:06 crc kubenswrapper[4869]: E0314 09:58:06.704585 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:58:07 crc kubenswrapper[4869]: I0314 09:58:07.318423 4869 scope.go:117] "RemoveContainer" containerID="c04ae7a1cd6f63394b24a30f7ec3ff3b146f8fef8cd59e26f3fd22c8d87b30ca" Mar 14 09:58:09 crc kubenswrapper[4869]: I0314 09:58:09.605244 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:58:09 crc kubenswrapper[4869]: I0314 09:58:09.605580 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" 
podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:58:15 crc kubenswrapper[4869]: I0314 09:58:15.705061 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:58:15 crc kubenswrapper[4869]: E0314 09:58:15.705983 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:58:19 crc kubenswrapper[4869]: I0314 09:58:19.704426 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:58:20 crc kubenswrapper[4869]: I0314 09:58:20.456896 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327"} Mar 14 09:58:24 crc kubenswrapper[4869]: I0314 09:58:24.404827 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:58:24 crc kubenswrapper[4869]: I0314 09:58:24.406586 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:58:27 crc kubenswrapper[4869]: I0314 09:58:27.971426 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ff4b8"] Mar 14 09:58:27 crc kubenswrapper[4869]: E0314 09:58:27.972579 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578c26ed-02c8-47f5-9e36-151cf98c6537" containerName="oc" 
Mar 14 09:58:27 crc kubenswrapper[4869]: I0314 09:58:27.972597 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="578c26ed-02c8-47f5-9e36-151cf98c6537" containerName="oc" Mar 14 09:58:27 crc kubenswrapper[4869]: I0314 09:58:27.972847 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="578c26ed-02c8-47f5-9e36-151cf98c6537" containerName="oc" Mar 14 09:58:27 crc kubenswrapper[4869]: I0314 09:58:27.974608 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:27 crc kubenswrapper[4869]: I0314 09:58:27.986367 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ff4b8"] Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.052146 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-utilities\") pod \"community-operators-ff4b8\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.052572 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzj76\" (UniqueName: \"kubernetes.io/projected/06e50f7b-a67f-4686-9057-945f0391ca74-kube-api-access-lzj76\") pod \"community-operators-ff4b8\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.052761 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-catalog-content\") pod \"community-operators-ff4b8\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 
crc kubenswrapper[4869]: I0314 09:58:28.154440 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-catalog-content\") pod \"community-operators-ff4b8\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.154548 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-utilities\") pod \"community-operators-ff4b8\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.154639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzj76\" (UniqueName: \"kubernetes.io/projected/06e50f7b-a67f-4686-9057-945f0391ca74-kube-api-access-lzj76\") pod \"community-operators-ff4b8\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.155145 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-catalog-content\") pod \"community-operators-ff4b8\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.155488 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-utilities\") pod \"community-operators-ff4b8\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 
09:58:28.178473 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzj76\" (UniqueName: \"kubernetes.io/projected/06e50f7b-a67f-4686-9057-945f0391ca74-kube-api-access-lzj76\") pod \"community-operators-ff4b8\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.297873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.577770 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" exitCode=1 Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.577875 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327"} Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.578719 4869 scope.go:117] "RemoveContainer" containerID="9b529627ab7ffd046cc23a8aea0ba6cc2b1dea6b50089e2eaa624605be8e587e" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.594488 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 09:58:28 crc kubenswrapper[4869]: E0314 09:58:28.595099 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.719750 4869 scope.go:117] "RemoveContainer" 
containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:58:28 crc kubenswrapper[4869]: E0314 09:58:28.719960 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.874096 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ff4b8"] Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.967140 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7fjtc"] Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.969574 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:28 crc kubenswrapper[4869]: I0314 09:58:28.980531 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fjtc"] Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.126015 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-catalog-content\") pod \"redhat-marketplace-7fjtc\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.126267 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt2vp\" (UniqueName: \"kubernetes.io/projected/2425d3f3-0b0c-4209-ba7c-00b53e686d55-kube-api-access-mt2vp\") pod \"redhat-marketplace-7fjtc\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " 
pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.126650 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-utilities\") pod \"redhat-marketplace-7fjtc\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.228415 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-utilities\") pod \"redhat-marketplace-7fjtc\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.228569 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-catalog-content\") pod \"redhat-marketplace-7fjtc\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.228630 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt2vp\" (UniqueName: \"kubernetes.io/projected/2425d3f3-0b0c-4209-ba7c-00b53e686d55-kube-api-access-mt2vp\") pod \"redhat-marketplace-7fjtc\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.229391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-utilities\") pod \"redhat-marketplace-7fjtc\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " 
pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.229462 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-catalog-content\") pod \"redhat-marketplace-7fjtc\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.252018 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt2vp\" (UniqueName: \"kubernetes.io/projected/2425d3f3-0b0c-4209-ba7c-00b53e686d55-kube-api-access-mt2vp\") pod \"redhat-marketplace-7fjtc\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.316175 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.599323 4869 generic.go:334] "Generic (PLEG): container finished" podID="06e50f7b-a67f-4686-9057-945f0391ca74" containerID="5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71" exitCode=0 Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.599653 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4b8" event={"ID":"06e50f7b-a67f-4686-9057-945f0391ca74","Type":"ContainerDied","Data":"5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71"} Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.599683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4b8" event={"ID":"06e50f7b-a67f-4686-9057-945f0391ca74","Type":"ContainerStarted","Data":"7310b7d6dc6d0df096f25c23a0e02ee95dac48b8f128a5d3fee3c28ac933eeff"} Mar 14 09:58:29 crc kubenswrapper[4869]: W0314 09:58:29.863790 4869 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2425d3f3_0b0c_4209_ba7c_00b53e686d55.slice/crio-376c8e85cae81dbbafd2f50920f1df0bef9721cc4766dc5001b97bbc24292f1b WatchSource:0}: Error finding container 376c8e85cae81dbbafd2f50920f1df0bef9721cc4766dc5001b97bbc24292f1b: Status 404 returned error can't find the container with id 376c8e85cae81dbbafd2f50920f1df0bef9721cc4766dc5001b97bbc24292f1b Mar 14 09:58:29 crc kubenswrapper[4869]: I0314 09:58:29.869478 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fjtc"] Mar 14 09:58:30 crc kubenswrapper[4869]: I0314 09:58:30.612108 4869 generic.go:334] "Generic (PLEG): container finished" podID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerID="8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b" exitCode=0 Mar 14 09:58:30 crc kubenswrapper[4869]: I0314 09:58:30.612212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fjtc" event={"ID":"2425d3f3-0b0c-4209-ba7c-00b53e686d55","Type":"ContainerDied","Data":"8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b"} Mar 14 09:58:30 crc kubenswrapper[4869]: I0314 09:58:30.612556 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fjtc" event={"ID":"2425d3f3-0b0c-4209-ba7c-00b53e686d55","Type":"ContainerStarted","Data":"376c8e85cae81dbbafd2f50920f1df0bef9721cc4766dc5001b97bbc24292f1b"} Mar 14 09:58:30 crc kubenswrapper[4869]: I0314 09:58:30.619376 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4b8" event={"ID":"06e50f7b-a67f-4686-9057-945f0391ca74","Type":"ContainerStarted","Data":"1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6"} Mar 14 09:58:31 crc kubenswrapper[4869]: I0314 09:58:31.634280 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="06e50f7b-a67f-4686-9057-945f0391ca74" containerID="1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6" exitCode=0 Mar 14 09:58:31 crc kubenswrapper[4869]: I0314 09:58:31.634423 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4b8" event={"ID":"06e50f7b-a67f-4686-9057-945f0391ca74","Type":"ContainerDied","Data":"1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6"} Mar 14 09:58:31 crc kubenswrapper[4869]: I0314 09:58:31.644497 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fjtc" event={"ID":"2425d3f3-0b0c-4209-ba7c-00b53e686d55","Type":"ContainerStarted","Data":"3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5"} Mar 14 09:58:32 crc kubenswrapper[4869]: I0314 09:58:32.662691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4b8" event={"ID":"06e50f7b-a67f-4686-9057-945f0391ca74","Type":"ContainerStarted","Data":"6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4"} Mar 14 09:58:32 crc kubenswrapper[4869]: I0314 09:58:32.675070 4869 generic.go:334] "Generic (PLEG): container finished" podID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerID="3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5" exitCode=0 Mar 14 09:58:32 crc kubenswrapper[4869]: I0314 09:58:32.675120 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fjtc" event={"ID":"2425d3f3-0b0c-4209-ba7c-00b53e686d55","Type":"ContainerDied","Data":"3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5"} Mar 14 09:58:32 crc kubenswrapper[4869]: I0314 09:58:32.675151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fjtc" 
event={"ID":"2425d3f3-0b0c-4209-ba7c-00b53e686d55","Type":"ContainerStarted","Data":"ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a"} Mar 14 09:58:32 crc kubenswrapper[4869]: I0314 09:58:32.691167 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ff4b8" podStartSLOduration=3.221511145 podStartE2EDuration="5.691140706s" podCreationTimestamp="2026-03-14 09:58:27 +0000 UTC" firstStartedPulling="2026-03-14 09:58:29.603742848 +0000 UTC m=+3662.576024901" lastFinishedPulling="2026-03-14 09:58:32.073372409 +0000 UTC m=+3665.045654462" observedRunningTime="2026-03-14 09:58:32.684430551 +0000 UTC m=+3665.656712624" watchObservedRunningTime="2026-03-14 09:58:32.691140706 +0000 UTC m=+3665.663422759" Mar 14 09:58:32 crc kubenswrapper[4869]: I0314 09:58:32.714781 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7fjtc" podStartSLOduration=3.243794193 podStartE2EDuration="4.714760556s" podCreationTimestamp="2026-03-14 09:58:28 +0000 UTC" firstStartedPulling="2026-03-14 09:58:30.614004801 +0000 UTC m=+3663.586286844" lastFinishedPulling="2026-03-14 09:58:32.084971154 +0000 UTC m=+3665.057253207" observedRunningTime="2026-03-14 09:58:32.704983466 +0000 UTC m=+3665.677265539" watchObservedRunningTime="2026-03-14 09:58:32.714760556 +0000 UTC m=+3665.687042609" Mar 14 09:58:34 crc kubenswrapper[4869]: I0314 09:58:34.405146 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:58:34 crc kubenswrapper[4869]: I0314 09:58:34.405517 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 09:58:34 crc kubenswrapper[4869]: I0314 09:58:34.406313 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 09:58:34 crc 
kubenswrapper[4869]: E0314 09:58:34.406623 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:58:38 crc kubenswrapper[4869]: I0314 09:58:38.298125 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:38 crc kubenswrapper[4869]: I0314 09:58:38.298532 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:38 crc kubenswrapper[4869]: I0314 09:58:38.405528 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:38 crc kubenswrapper[4869]: I0314 09:58:38.805244 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:38 crc kubenswrapper[4869]: I0314 09:58:38.871036 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ff4b8"] Mar 14 09:58:39 crc kubenswrapper[4869]: I0314 09:58:39.317063 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:39 crc kubenswrapper[4869]: I0314 09:58:39.317125 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:39 crc kubenswrapper[4869]: I0314 09:58:39.391887 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:39 crc kubenswrapper[4869]: I0314 09:58:39.604967 4869 patch_prober.go:28] interesting 
pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:58:39 crc kubenswrapper[4869]: I0314 09:58:39.605044 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:58:39 crc kubenswrapper[4869]: I0314 09:58:39.704771 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:58:39 crc kubenswrapper[4869]: E0314 09:58:39.705068 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:58:39 crc kubenswrapper[4869]: I0314 09:58:39.835396 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:40 crc kubenswrapper[4869]: I0314 09:58:40.779883 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ff4b8" podUID="06e50f7b-a67f-4686-9057-945f0391ca74" containerName="registry-server" containerID="cri-o://6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4" gracePeriod=2 Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.072448 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fjtc"] Mar 14 09:58:41 crc 
kubenswrapper[4869]: I0314 09:58:41.760939 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.794857 4869 generic.go:334] "Generic (PLEG): container finished" podID="06e50f7b-a67f-4686-9057-945f0391ca74" containerID="6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4" exitCode=0 Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.795131 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7fjtc" podUID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerName="registry-server" containerID="cri-o://ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a" gracePeriod=2 Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.795492 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ff4b8" Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.795715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4b8" event={"ID":"06e50f7b-a67f-4686-9057-945f0391ca74","Type":"ContainerDied","Data":"6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4"} Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.795796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ff4b8" event={"ID":"06e50f7b-a67f-4686-9057-945f0391ca74","Type":"ContainerDied","Data":"7310b7d6dc6d0df096f25c23a0e02ee95dac48b8f128a5d3fee3c28ac933eeff"} Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.795831 4869 scope.go:117] "RemoveContainer" containerID="6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4" Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.826913 4869 scope.go:117] "RemoveContainer" 
containerID="1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6" Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.908860 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-catalog-content\") pod \"06e50f7b-a67f-4686-9057-945f0391ca74\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.908955 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzj76\" (UniqueName: \"kubernetes.io/projected/06e50f7b-a67f-4686-9057-945f0391ca74-kube-api-access-lzj76\") pod \"06e50f7b-a67f-4686-9057-945f0391ca74\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.909151 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-utilities\") pod \"06e50f7b-a67f-4686-9057-945f0391ca74\" (UID: \"06e50f7b-a67f-4686-9057-945f0391ca74\") " Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.910188 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-utilities" (OuterVolumeSpecName: "utilities") pod "06e50f7b-a67f-4686-9057-945f0391ca74" (UID: "06e50f7b-a67f-4686-9057-945f0391ca74"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.914739 4869 scope.go:117] "RemoveContainer" containerID="5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71" Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.920803 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e50f7b-a67f-4686-9057-945f0391ca74-kube-api-access-lzj76" (OuterVolumeSpecName: "kube-api-access-lzj76") pod "06e50f7b-a67f-4686-9057-945f0391ca74" (UID: "06e50f7b-a67f-4686-9057-945f0391ca74"). InnerVolumeSpecName "kube-api-access-lzj76". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:58:41 crc kubenswrapper[4869]: I0314 09:58:41.969870 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06e50f7b-a67f-4686-9057-945f0391ca74" (UID: "06e50f7b-a67f-4686-9057-945f0391ca74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.013833 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.014205 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e50f7b-a67f-4686-9057-945f0391ca74-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.014221 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzj76\" (UniqueName: \"kubernetes.io/projected/06e50f7b-a67f-4686-9057-945f0391ca74-kube-api-access-lzj76\") on node \"crc\" DevicePath \"\"" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.049550 4869 scope.go:117] "RemoveContainer" containerID="6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4" Mar 14 09:58:42 crc kubenswrapper[4869]: E0314 09:58:42.050292 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4\": container with ID starting with 6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4 not found: ID does not exist" containerID="6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.050340 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4"} err="failed to get container status \"6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4\": rpc error: code = NotFound desc = could not find container \"6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4\": container with ID 
starting with 6011509ae7296e466ac5d74df411d251783cd4a289578af5d02b267c679a36b4 not found: ID does not exist" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.050369 4869 scope.go:117] "RemoveContainer" containerID="1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6" Mar 14 09:58:42 crc kubenswrapper[4869]: E0314 09:58:42.050854 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6\": container with ID starting with 1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6 not found: ID does not exist" containerID="1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.050898 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6"} err="failed to get container status \"1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6\": rpc error: code = NotFound desc = could not find container \"1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6\": container with ID starting with 1ac9ceb06d4c599695d80f8e2154baf7f4b99f72235b7c3275f509af12fde5d6 not found: ID does not exist" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.050935 4869 scope.go:117] "RemoveContainer" containerID="5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71" Mar 14 09:58:42 crc kubenswrapper[4869]: E0314 09:58:42.051528 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71\": container with ID starting with 5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71 not found: ID does not exist" containerID="5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71" Mar 14 
09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.051571 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71"} err="failed to get container status \"5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71\": rpc error: code = NotFound desc = could not find container \"5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71\": container with ID starting with 5407c7ea4cd506ae2560fdda15bc860913604921938a793284e2ecee65ae5e71 not found: ID does not exist" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.135546 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ff4b8"] Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.144649 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ff4b8"] Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.347363 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.422237 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-catalog-content\") pod \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.422382 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-utilities\") pod \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.422730 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt2vp\" (UniqueName: \"kubernetes.io/projected/2425d3f3-0b0c-4209-ba7c-00b53e686d55-kube-api-access-mt2vp\") pod \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\" (UID: \"2425d3f3-0b0c-4209-ba7c-00b53e686d55\") " Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.423639 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-utilities" (OuterVolumeSpecName: "utilities") pod "2425d3f3-0b0c-4209-ba7c-00b53e686d55" (UID: "2425d3f3-0b0c-4209-ba7c-00b53e686d55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.428317 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2425d3f3-0b0c-4209-ba7c-00b53e686d55-kube-api-access-mt2vp" (OuterVolumeSpecName: "kube-api-access-mt2vp") pod "2425d3f3-0b0c-4209-ba7c-00b53e686d55" (UID: "2425d3f3-0b0c-4209-ba7c-00b53e686d55"). InnerVolumeSpecName "kube-api-access-mt2vp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.453201 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2425d3f3-0b0c-4209-ba7c-00b53e686d55" (UID: "2425d3f3-0b0c-4209-ba7c-00b53e686d55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.525563 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.525594 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2425d3f3-0b0c-4209-ba7c-00b53e686d55-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.525607 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt2vp\" (UniqueName: \"kubernetes.io/projected/2425d3f3-0b0c-4209-ba7c-00b53e686d55-kube-api-access-mt2vp\") on node \"crc\" DevicePath \"\"" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.806636 4869 generic.go:334] "Generic (PLEG): container finished" podID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerID="ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a" exitCode=0 Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.806688 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fjtc" event={"ID":"2425d3f3-0b0c-4209-ba7c-00b53e686d55","Type":"ContainerDied","Data":"ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a"} Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.806893 4869 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7fjtc" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.807185 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7fjtc" event={"ID":"2425d3f3-0b0c-4209-ba7c-00b53e686d55","Type":"ContainerDied","Data":"376c8e85cae81dbbafd2f50920f1df0bef9721cc4766dc5001b97bbc24292f1b"} Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.807215 4869 scope.go:117] "RemoveContainer" containerID="ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.830748 4869 scope.go:117] "RemoveContainer" containerID="3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.862892 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fjtc"] Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.876420 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7fjtc"] Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.876913 4869 scope.go:117] "RemoveContainer" containerID="8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.896466 4869 scope.go:117] "RemoveContainer" containerID="ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a" Mar 14 09:58:42 crc kubenswrapper[4869]: E0314 09:58:42.897073 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a\": container with ID starting with ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a not found: ID does not exist" containerID="ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.897155 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a"} err="failed to get container status \"ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a\": rpc error: code = NotFound desc = could not find container \"ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a\": container with ID starting with ff3f2124770b1a8c17d6e01e8b5b8abb4edc8098276b3d440f59fc5ef7d5188a not found: ID does not exist" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.897283 4869 scope.go:117] "RemoveContainer" containerID="3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5" Mar 14 09:58:42 crc kubenswrapper[4869]: E0314 09:58:42.897948 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5\": container with ID starting with 3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5 not found: ID does not exist" containerID="3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.897999 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5"} err="failed to get container status \"3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5\": rpc error: code = NotFound desc = could not find container \"3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5\": container with ID starting with 3611187d5666da8b2e8d72ee31763117b796da4231ff79a5e20058f023b7c4f5 not found: ID does not exist" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.898032 4869 scope.go:117] "RemoveContainer" containerID="8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b" Mar 14 09:58:42 crc kubenswrapper[4869]: E0314 
09:58:42.898305 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b\": container with ID starting with 8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b not found: ID does not exist" containerID="8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b" Mar 14 09:58:42 crc kubenswrapper[4869]: I0314 09:58:42.898338 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b"} err="failed to get container status \"8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b\": rpc error: code = NotFound desc = could not find container \"8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b\": container with ID starting with 8ec20d0e5f7336996d3f67c4be601b0a145e5b6d61247a16002b27b1e1bff52b not found: ID does not exist" Mar 14 09:58:43 crc kubenswrapper[4869]: I0314 09:58:43.723576 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e50f7b-a67f-4686-9057-945f0391ca74" path="/var/lib/kubelet/pods/06e50f7b-a67f-4686-9057-945f0391ca74/volumes" Mar 14 09:58:43 crc kubenswrapper[4869]: I0314 09:58:43.724872 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" path="/var/lib/kubelet/pods/2425d3f3-0b0c-4209-ba7c-00b53e686d55/volumes" Mar 14 09:58:47 crc kubenswrapper[4869]: I0314 09:58:47.709386 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 09:58:47 crc kubenswrapper[4869]: E0314 09:58:47.709933 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:58:51 crc kubenswrapper[4869]: I0314 09:58:51.703485 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:58:51 crc kubenswrapper[4869]: E0314 09:58:51.704210 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:58:59 crc kubenswrapper[4869]: I0314 09:58:59.611056 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 09:58:59 crc kubenswrapper[4869]: E0314 09:58:59.612418 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:59:05 crc kubenswrapper[4869]: I0314 09:59:05.704326 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:59:05 crc kubenswrapper[4869]: E0314 09:59:05.705224 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:59:09 crc kubenswrapper[4869]: I0314 09:59:09.605031 
4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 09:59:09 crc kubenswrapper[4869]: I0314 09:59:09.605922 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 09:59:09 crc kubenswrapper[4869]: I0314 09:59:09.605971 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 09:59:09 crc kubenswrapper[4869]: I0314 09:59:09.606826 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 09:59:09 crc kubenswrapper[4869]: I0314 09:59:09.606873 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" gracePeriod=600 Mar 14 09:59:09 crc kubenswrapper[4869]: E0314 09:59:09.738851 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:59:10 crc kubenswrapper[4869]: I0314 09:59:10.092274 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" exitCode=0 Mar 14 09:59:10 crc kubenswrapper[4869]: I0314 09:59:10.092746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738"} Mar 14 09:59:10 crc kubenswrapper[4869]: I0314 09:59:10.092789 4869 scope.go:117] "RemoveContainer" containerID="67876cd828f4b777d28fcf869a354af256d2cf26d5306f0de4c3d4644fecdd2a" Mar 14 09:59:10 crc kubenswrapper[4869]: I0314 09:59:10.093747 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 09:59:10 crc kubenswrapper[4869]: E0314 09:59:10.094078 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:59:10 crc kubenswrapper[4869]: I0314 09:59:10.704064 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 09:59:10 crc kubenswrapper[4869]: E0314 09:59:10.704389 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:59:18 crc kubenswrapper[4869]: I0314 09:59:18.704552 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:59:18 crc kubenswrapper[4869]: E0314 09:59:18.705299 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:59:22 crc kubenswrapper[4869]: I0314 09:59:22.704156 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 09:59:22 crc kubenswrapper[4869]: E0314 09:59:22.705307 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:59:23 crc kubenswrapper[4869]: I0314 09:59:23.704704 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 09:59:23 crc kubenswrapper[4869]: E0314 09:59:23.705083 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:59:31 crc kubenswrapper[4869]: I0314 09:59:31.704482 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:59:31 crc kubenswrapper[4869]: E0314 09:59:31.705432 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:59:34 crc kubenswrapper[4869]: I0314 09:59:34.716779 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 09:59:34 crc kubenswrapper[4869]: E0314 09:59:34.717497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:59:35 crc kubenswrapper[4869]: I0314 09:59:35.704408 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 09:59:35 crc kubenswrapper[4869]: E0314 09:59:35.705008 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" 
podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:59:43 crc kubenswrapper[4869]: I0314 09:59:43.704497 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:59:43 crc kubenswrapper[4869]: E0314 09:59:43.706593 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:59:48 crc kubenswrapper[4869]: I0314 09:59:48.704297 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 09:59:48 crc kubenswrapper[4869]: E0314 09:59:48.707028 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 09:59:49 crc kubenswrapper[4869]: I0314 09:59:49.704219 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 09:59:49 crc kubenswrapper[4869]: E0314 09:59:49.704475 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 09:59:54 crc kubenswrapper[4869]: I0314 09:59:54.703841 4869 scope.go:117] "RemoveContainer" 
containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 09:59:54 crc kubenswrapper[4869]: E0314 09:59:54.704731 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 09:59:59 crc kubenswrapper[4869]: I0314 09:59:59.705221 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 09:59:59 crc kubenswrapper[4869]: E0314 09:59:59.706205 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.154176 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558040-8dwl5"] Mar 14 10:00:00 crc kubenswrapper[4869]: E0314 10:00:00.154670 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e50f7b-a67f-4686-9057-945f0391ca74" containerName="registry-server" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.154689 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e50f7b-a67f-4686-9057-945f0391ca74" containerName="registry-server" Mar 14 10:00:00 crc kubenswrapper[4869]: E0314 10:00:00.154709 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerName="extract-content" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.154716 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerName="extract-content" Mar 14 10:00:00 crc kubenswrapper[4869]: E0314 10:00:00.154732 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerName="registry-server" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.154739 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerName="registry-server" Mar 14 10:00:00 crc kubenswrapper[4869]: E0314 10:00:00.154753 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e50f7b-a67f-4686-9057-945f0391ca74" containerName="extract-content" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.154759 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e50f7b-a67f-4686-9057-945f0391ca74" containerName="extract-content" Mar 14 10:00:00 crc kubenswrapper[4869]: E0314 10:00:00.154775 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e50f7b-a67f-4686-9057-945f0391ca74" containerName="extract-utilities" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.154781 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e50f7b-a67f-4686-9057-945f0391ca74" containerName="extract-utilities" Mar 14 10:00:00 crc kubenswrapper[4869]: E0314 10:00:00.154801 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerName="extract-utilities" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.154807 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerName="extract-utilities" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.155036 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2425d3f3-0b0c-4209-ba7c-00b53e686d55" containerName="registry-server" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.155063 4869 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="06e50f7b-a67f-4686-9057-945f0391ca74" containerName="registry-server" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.155780 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558040-8dwl5" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.160921 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.161214 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.163107 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.169303 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558040-8dwl5"] Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.245742 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvqnh\" (UniqueName: \"kubernetes.io/projected/31cbd2d2-f710-4534-99d9-8263ee4cf905-kube-api-access-gvqnh\") pod \"auto-csr-approver-29558040-8dwl5\" (UID: \"31cbd2d2-f710-4534-99d9-8263ee4cf905\") " pod="openshift-infra/auto-csr-approver-29558040-8dwl5" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.259837 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582"] Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.261671 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.264095 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.264274 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.273241 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582"] Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.348005 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fae88d84-537a-4c3c-8070-ba3c5008c0c7-config-volume\") pod \"collect-profiles-29558040-5b582\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.348275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvqnh\" (UniqueName: \"kubernetes.io/projected/31cbd2d2-f710-4534-99d9-8263ee4cf905-kube-api-access-gvqnh\") pod \"auto-csr-approver-29558040-8dwl5\" (UID: \"31cbd2d2-f710-4534-99d9-8263ee4cf905\") " pod="openshift-infra/auto-csr-approver-29558040-8dwl5" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.348337 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmzzh\" (UniqueName: \"kubernetes.io/projected/fae88d84-537a-4c3c-8070-ba3c5008c0c7-kube-api-access-gmzzh\") pod \"collect-profiles-29558040-5b582\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.348473 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fae88d84-537a-4c3c-8070-ba3c5008c0c7-secret-volume\") pod \"collect-profiles-29558040-5b582\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.367311 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvqnh\" (UniqueName: \"kubernetes.io/projected/31cbd2d2-f710-4534-99d9-8263ee4cf905-kube-api-access-gvqnh\") pod \"auto-csr-approver-29558040-8dwl5\" (UID: \"31cbd2d2-f710-4534-99d9-8263ee4cf905\") " pod="openshift-infra/auto-csr-approver-29558040-8dwl5" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.450214 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fae88d84-537a-4c3c-8070-ba3c5008c0c7-secret-volume\") pod \"collect-profiles-29558040-5b582\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.450577 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fae88d84-537a-4c3c-8070-ba3c5008c0c7-config-volume\") pod \"collect-profiles-29558040-5b582\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.450745 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmzzh\" (UniqueName: 
\"kubernetes.io/projected/fae88d84-537a-4c3c-8070-ba3c5008c0c7-kube-api-access-gmzzh\") pod \"collect-profiles-29558040-5b582\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.451684 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fae88d84-537a-4c3c-8070-ba3c5008c0c7-config-volume\") pod \"collect-profiles-29558040-5b582\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.454398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fae88d84-537a-4c3c-8070-ba3c5008c0c7-secret-volume\") pod \"collect-profiles-29558040-5b582\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.472325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmzzh\" (UniqueName: \"kubernetes.io/projected/fae88d84-537a-4c3c-8070-ba3c5008c0c7-kube-api-access-gmzzh\") pod \"collect-profiles-29558040-5b582\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.505253 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558040-8dwl5" Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.580182 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:00 crc kubenswrapper[4869]: W0314 10:00:00.957097 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31cbd2d2_f710_4534_99d9_8263ee4cf905.slice/crio-f4278d34670787fd06295b712910659b735f955f70b4338ce539262eac7a31a4 WatchSource:0}: Error finding container f4278d34670787fd06295b712910659b735f955f70b4338ce539262eac7a31a4: Status 404 returned error can't find the container with id f4278d34670787fd06295b712910659b735f955f70b4338ce539262eac7a31a4 Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.957222 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558040-8dwl5"] Mar 14 10:00:00 crc kubenswrapper[4869]: I0314 10:00:00.959612 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 10:00:01 crc kubenswrapper[4869]: I0314 10:00:01.069168 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582"] Mar 14 10:00:01 crc kubenswrapper[4869]: W0314 10:00:01.073783 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfae88d84_537a_4c3c_8070_ba3c5008c0c7.slice/crio-920af524c0966d4ce5fd14b5255d22984905d55d9d22337ec8a1721cccb9c07c WatchSource:0}: Error finding container 920af524c0966d4ce5fd14b5255d22984905d55d9d22337ec8a1721cccb9c07c: Status 404 returned error can't find the container with id 920af524c0966d4ce5fd14b5255d22984905d55d9d22337ec8a1721cccb9c07c Mar 14 10:00:01 crc kubenswrapper[4869]: I0314 10:00:01.601684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558040-8dwl5" 
event={"ID":"31cbd2d2-f710-4534-99d9-8263ee4cf905","Type":"ContainerStarted","Data":"f4278d34670787fd06295b712910659b735f955f70b4338ce539262eac7a31a4"} Mar 14 10:00:01 crc kubenswrapper[4869]: I0314 10:00:01.603619 4869 generic.go:334] "Generic (PLEG): container finished" podID="fae88d84-537a-4c3c-8070-ba3c5008c0c7" containerID="277eea0935ec6d542b16f35fa27321577605fab8b17ef9af240070e13660de95" exitCode=0 Mar 14 10:00:01 crc kubenswrapper[4869]: I0314 10:00:01.603646 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" event={"ID":"fae88d84-537a-4c3c-8070-ba3c5008c0c7","Type":"ContainerDied","Data":"277eea0935ec6d542b16f35fa27321577605fab8b17ef9af240070e13660de95"} Mar 14 10:00:01 crc kubenswrapper[4869]: I0314 10:00:01.603664 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" event={"ID":"fae88d84-537a-4c3c-8070-ba3c5008c0c7","Type":"ContainerStarted","Data":"920af524c0966d4ce5fd14b5255d22984905d55d9d22337ec8a1721cccb9c07c"} Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.009812 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.110790 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fae88d84-537a-4c3c-8070-ba3c5008c0c7-secret-volume\") pod \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.110909 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmzzh\" (UniqueName: \"kubernetes.io/projected/fae88d84-537a-4c3c-8070-ba3c5008c0c7-kube-api-access-gmzzh\") pod \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.111027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fae88d84-537a-4c3c-8070-ba3c5008c0c7-config-volume\") pod \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\" (UID: \"fae88d84-537a-4c3c-8070-ba3c5008c0c7\") " Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.112071 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae88d84-537a-4c3c-8070-ba3c5008c0c7-config-volume" (OuterVolumeSpecName: "config-volume") pod "fae88d84-537a-4c3c-8070-ba3c5008c0c7" (UID: "fae88d84-537a-4c3c-8070-ba3c5008c0c7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.117889 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fae88d84-537a-4c3c-8070-ba3c5008c0c7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fae88d84-537a-4c3c-8070-ba3c5008c0c7" (UID: "fae88d84-537a-4c3c-8070-ba3c5008c0c7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.122691 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae88d84-537a-4c3c-8070-ba3c5008c0c7-kube-api-access-gmzzh" (OuterVolumeSpecName: "kube-api-access-gmzzh") pod "fae88d84-537a-4c3c-8070-ba3c5008c0c7" (UID: "fae88d84-537a-4c3c-8070-ba3c5008c0c7"). InnerVolumeSpecName "kube-api-access-gmzzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.213899 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fae88d84-537a-4c3c-8070-ba3c5008c0c7-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.213953 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmzzh\" (UniqueName: \"kubernetes.io/projected/fae88d84-537a-4c3c-8070-ba3c5008c0c7-kube-api-access-gmzzh\") on node \"crc\" DevicePath \"\"" Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.213972 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fae88d84-537a-4c3c-8070-ba3c5008c0c7-config-volume\") on node \"crc\" DevicePath \"\"" Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.622532 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" event={"ID":"fae88d84-537a-4c3c-8070-ba3c5008c0c7","Type":"ContainerDied","Data":"920af524c0966d4ce5fd14b5255d22984905d55d9d22337ec8a1721cccb9c07c"} Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.622577 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="920af524c0966d4ce5fd14b5255d22984905d55d9d22337ec8a1721cccb9c07c" Mar 14 10:00:03 crc kubenswrapper[4869]: I0314 10:00:03.622626 4869 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558040-5b582" Mar 14 10:00:04 crc kubenswrapper[4869]: I0314 10:00:04.085422 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl"] Mar 14 10:00:04 crc kubenswrapper[4869]: I0314 10:00:04.094388 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29557995-2nmzl"] Mar 14 10:00:04 crc kubenswrapper[4869]: I0314 10:00:04.704613 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:00:04 crc kubenswrapper[4869]: E0314 10:00:04.704924 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:00:05 crc kubenswrapper[4869]: I0314 10:00:05.704875 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:00:05 crc kubenswrapper[4869]: E0314 10:00:05.705373 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:00:05 crc kubenswrapper[4869]: I0314 10:00:05.719932 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7538975c-1363-475d-a191-ec59a5810d40" 
path="/var/lib/kubelet/pods/7538975c-1363-475d-a191-ec59a5810d40/volumes" Mar 14 10:00:07 crc kubenswrapper[4869]: I0314 10:00:07.452631 4869 scope.go:117] "RemoveContainer" containerID="c98955ae2e1a961fbbd60a60ef4f9d9f4d8b24dfc7d10a7b9f787f1806da6372" Mar 14 10:00:09 crc kubenswrapper[4869]: I0314 10:00:09.691481 4869 generic.go:334] "Generic (PLEG): container finished" podID="31cbd2d2-f710-4534-99d9-8263ee4cf905" containerID="ec7e79cc1a450d522c50bb5a290017559a119d26044937778117a0fd41383ec0" exitCode=0 Mar 14 10:00:09 crc kubenswrapper[4869]: I0314 10:00:09.691566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558040-8dwl5" event={"ID":"31cbd2d2-f710-4534-99d9-8263ee4cf905","Type":"ContainerDied","Data":"ec7e79cc1a450d522c50bb5a290017559a119d26044937778117a0fd41383ec0"} Mar 14 10:00:11 crc kubenswrapper[4869]: I0314 10:00:11.063763 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558040-8dwl5" Mar 14 10:00:11 crc kubenswrapper[4869]: I0314 10:00:11.184488 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvqnh\" (UniqueName: \"kubernetes.io/projected/31cbd2d2-f710-4534-99d9-8263ee4cf905-kube-api-access-gvqnh\") pod \"31cbd2d2-f710-4534-99d9-8263ee4cf905\" (UID: \"31cbd2d2-f710-4534-99d9-8263ee4cf905\") " Mar 14 10:00:11 crc kubenswrapper[4869]: I0314 10:00:11.191913 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31cbd2d2-f710-4534-99d9-8263ee4cf905-kube-api-access-gvqnh" (OuterVolumeSpecName: "kube-api-access-gvqnh") pod "31cbd2d2-f710-4534-99d9-8263ee4cf905" (UID: "31cbd2d2-f710-4534-99d9-8263ee4cf905"). InnerVolumeSpecName "kube-api-access-gvqnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:00:11 crc kubenswrapper[4869]: I0314 10:00:11.287012 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvqnh\" (UniqueName: \"kubernetes.io/projected/31cbd2d2-f710-4534-99d9-8263ee4cf905-kube-api-access-gvqnh\") on node \"crc\" DevicePath \"\"" Mar 14 10:00:11 crc kubenswrapper[4869]: I0314 10:00:11.709107 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558040-8dwl5" Mar 14 10:00:11 crc kubenswrapper[4869]: I0314 10:00:11.714968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558040-8dwl5" event={"ID":"31cbd2d2-f710-4534-99d9-8263ee4cf905","Type":"ContainerDied","Data":"f4278d34670787fd06295b712910659b735f955f70b4338ce539262eac7a31a4"} Mar 14 10:00:11 crc kubenswrapper[4869]: I0314 10:00:11.715006 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4278d34670787fd06295b712910659b735f955f70b4338ce539262eac7a31a4" Mar 14 10:00:12 crc kubenswrapper[4869]: I0314 10:00:12.121990 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558034-c6dwk"] Mar 14 10:00:12 crc kubenswrapper[4869]: I0314 10:00:12.132867 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558034-c6dwk"] Mar 14 10:00:12 crc kubenswrapper[4869]: I0314 10:00:12.705076 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:00:12 crc kubenswrapper[4869]: E0314 10:00:12.705311 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:00:13 crc kubenswrapper[4869]: I0314 10:00:13.714306 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6f6c57-8eff-4c88-ad8d-a82f852aeb11" path="/var/lib/kubelet/pods/4c6f6c57-8eff-4c88-ad8d-a82f852aeb11/volumes" Mar 14 10:00:17 crc kubenswrapper[4869]: I0314 10:00:17.710431 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:00:17 crc kubenswrapper[4869]: E0314 10:00:17.711182 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:00:20 crc kubenswrapper[4869]: I0314 10:00:20.704127 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:00:20 crc kubenswrapper[4869]: E0314 10:00:20.704876 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:00:23 crc kubenswrapper[4869]: I0314 10:00:23.704717 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:00:23 crc kubenswrapper[4869]: E0314 10:00:23.705317 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:00:30 crc kubenswrapper[4869]: I0314 10:00:30.703570 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:00:30 crc kubenswrapper[4869]: E0314 10:00:30.704415 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:00:31 crc kubenswrapper[4869]: I0314 10:00:31.704848 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:00:31 crc kubenswrapper[4869]: E0314 10:00:31.705047 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:00:38 crc kubenswrapper[4869]: I0314 10:00:38.704425 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:00:38 crc kubenswrapper[4869]: E0314 10:00:38.705436 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:00:42 crc kubenswrapper[4869]: I0314 10:00:42.704473 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:00:42 crc kubenswrapper[4869]: E0314 10:00:42.705030 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:00:42 crc kubenswrapper[4869]: I0314 10:00:42.705039 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:00:42 crc kubenswrapper[4869]: E0314 10:00:42.705247 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:00:51 crc kubenswrapper[4869]: I0314 10:00:51.705645 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:00:51 crc kubenswrapper[4869]: E0314 10:00:51.706991 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:00:54 crc kubenswrapper[4869]: I0314 10:00:54.704826 4869 scope.go:117] "RemoveContainer" 
containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:00:54 crc kubenswrapper[4869]: E0314 10:00:54.705942 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:00:55 crc kubenswrapper[4869]: I0314 10:00:55.703768 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:00:55 crc kubenswrapper[4869]: E0314 10:00:55.704018 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.173974 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29558041-bpmml"] Mar 14 10:01:00 crc kubenswrapper[4869]: E0314 10:01:00.174906 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae88d84-537a-4c3c-8070-ba3c5008c0c7" containerName="collect-profiles" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.174920 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae88d84-537a-4c3c-8070-ba3c5008c0c7" containerName="collect-profiles" Mar 14 10:01:00 crc kubenswrapper[4869]: E0314 10:01:00.174939 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31cbd2d2-f710-4534-99d9-8263ee4cf905" containerName="oc" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.174946 4869 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="31cbd2d2-f710-4534-99d9-8263ee4cf905" containerName="oc" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.175122 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae88d84-537a-4c3c-8070-ba3c5008c0c7" containerName="collect-profiles" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.175135 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="31cbd2d2-f710-4534-99d9-8263ee4cf905" containerName="oc" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.175912 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.203123 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29558041-bpmml"] Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.225015 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hg47\" (UniqueName: \"kubernetes.io/projected/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-kube-api-access-7hg47\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.225116 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-fernet-keys\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.225159 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-config-data\") pod \"keystone-cron-29558041-bpmml\" (UID: 
\"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.225200 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-combined-ca-bundle\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.326988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hg47\" (UniqueName: \"kubernetes.io/projected/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-kube-api-access-7hg47\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.327076 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-fernet-keys\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.327117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-config-data\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.327151 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-combined-ca-bundle\") pod \"keystone-cron-29558041-bpmml\" (UID: 
\"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.335983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-combined-ca-bundle\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.336772 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-fernet-keys\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.340598 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-config-data\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.344929 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hg47\" (UniqueName: \"kubernetes.io/projected/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-kube-api-access-7hg47\") pod \"keystone-cron-29558041-bpmml\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.504987 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:00 crc kubenswrapper[4869]: I0314 10:01:00.981603 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29558041-bpmml"] Mar 14 10:01:01 crc kubenswrapper[4869]: I0314 10:01:01.177539 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29558041-bpmml" event={"ID":"1c2a8743-c0f6-4e8b-b47f-157d2b478e00","Type":"ContainerStarted","Data":"f625ceac077b167dea370324e00427ef3df3d75dc09c388795533da76a7eb803"} Mar 14 10:01:02 crc kubenswrapper[4869]: I0314 10:01:02.195545 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29558041-bpmml" event={"ID":"1c2a8743-c0f6-4e8b-b47f-157d2b478e00","Type":"ContainerStarted","Data":"a300b2553dcf219b6be445433ac6e2b2bcfa63a60005cae2f5985425799d45a5"} Mar 14 10:01:02 crc kubenswrapper[4869]: I0314 10:01:02.224357 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29558041-bpmml" podStartSLOduration=2.224335436 podStartE2EDuration="2.224335436s" podCreationTimestamp="2026-03-14 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 10:01:02.213286024 +0000 UTC m=+3815.185568127" watchObservedRunningTime="2026-03-14 10:01:02.224335436 +0000 UTC m=+3815.196617519" Mar 14 10:01:02 crc kubenswrapper[4869]: I0314 10:01:02.704774 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:01:02 crc kubenswrapper[4869]: E0314 10:01:02.705363 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:01:04 crc kubenswrapper[4869]: I0314 10:01:04.213145 4869 generic.go:334] "Generic (PLEG): container finished" podID="1c2a8743-c0f6-4e8b-b47f-157d2b478e00" containerID="a300b2553dcf219b6be445433ac6e2b2bcfa63a60005cae2f5985425799d45a5" exitCode=0 Mar 14 10:01:04 crc kubenswrapper[4869]: I0314 10:01:04.213221 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29558041-bpmml" event={"ID":"1c2a8743-c0f6-4e8b-b47f-157d2b478e00","Type":"ContainerDied","Data":"a300b2553dcf219b6be445433ac6e2b2bcfa63a60005cae2f5985425799d45a5"} Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.545673 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.653343 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hg47\" (UniqueName: \"kubernetes.io/projected/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-kube-api-access-7hg47\") pod \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.653451 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-config-data\") pod \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.653536 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-fernet-keys\") pod \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.653701 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-combined-ca-bundle\") pod \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\" (UID: \"1c2a8743-c0f6-4e8b-b47f-157d2b478e00\") " Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.659818 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-kube-api-access-7hg47" (OuterVolumeSpecName: "kube-api-access-7hg47") pod "1c2a8743-c0f6-4e8b-b47f-157d2b478e00" (UID: "1c2a8743-c0f6-4e8b-b47f-157d2b478e00"). InnerVolumeSpecName "kube-api-access-7hg47". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.662791 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1c2a8743-c0f6-4e8b-b47f-157d2b478e00" (UID: "1c2a8743-c0f6-4e8b-b47f-157d2b478e00"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.697551 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c2a8743-c0f6-4e8b-b47f-157d2b478e00" (UID: "1c2a8743-c0f6-4e8b-b47f-157d2b478e00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.720960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-config-data" (OuterVolumeSpecName: "config-data") pod "1c2a8743-c0f6-4e8b-b47f-157d2b478e00" (UID: "1c2a8743-c0f6-4e8b-b47f-157d2b478e00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.757723 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.757761 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hg47\" (UniqueName: \"kubernetes.io/projected/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-kube-api-access-7hg47\") on node \"crc\" DevicePath \"\"" Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.757773 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-config-data\") on node \"crc\" DevicePath \"\"" Mar 14 10:01:05 crc kubenswrapper[4869]: I0314 10:01:05.757784 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c2a8743-c0f6-4e8b-b47f-157d2b478e00-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 14 10:01:06 crc kubenswrapper[4869]: I0314 10:01:06.270092 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29558041-bpmml" event={"ID":"1c2a8743-c0f6-4e8b-b47f-157d2b478e00","Type":"ContainerDied","Data":"f625ceac077b167dea370324e00427ef3df3d75dc09c388795533da76a7eb803"} Mar 14 10:01:06 crc kubenswrapper[4869]: I0314 10:01:06.270169 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f625ceac077b167dea370324e00427ef3df3d75dc09c388795533da76a7eb803" Mar 14 10:01:06 crc kubenswrapper[4869]: I0314 10:01:06.270394 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29558041-bpmml" Mar 14 10:01:07 crc kubenswrapper[4869]: I0314 10:01:07.521139 4869 scope.go:117] "RemoveContainer" containerID="52ddfbb366d3fca98bd3c7abcc642112201e499a6a5c4590ca18d75bf514ad0a" Mar 14 10:01:09 crc kubenswrapper[4869]: I0314 10:01:09.705746 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:01:09 crc kubenswrapper[4869]: E0314 10:01:09.706299 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:01:10 crc kubenswrapper[4869]: I0314 10:01:10.703460 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:01:10 crc kubenswrapper[4869]: E0314 10:01:10.703838 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:01:13 crc kubenswrapper[4869]: I0314 10:01:13.703778 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:01:13 crc kubenswrapper[4869]: E0314 10:01:13.704497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:01:23 crc kubenswrapper[4869]: I0314 10:01:23.704009 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:01:23 crc kubenswrapper[4869]: E0314 10:01:23.704809 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:01:24 crc kubenswrapper[4869]: I0314 10:01:24.713075 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:01:24 crc kubenswrapper[4869]: E0314 10:01:24.713649 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:01:26 crc kubenswrapper[4869]: I0314 10:01:26.704392 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:01:26 crc kubenswrapper[4869]: E0314 10:01:26.705062 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:01:35 crc kubenswrapper[4869]: I0314 10:01:35.704803 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:01:35 crc kubenswrapper[4869]: E0314 10:01:35.705862 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:01:35 crc kubenswrapper[4869]: I0314 10:01:35.705868 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:01:35 crc kubenswrapper[4869]: E0314 10:01:35.706086 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:01:41 crc kubenswrapper[4869]: I0314 10:01:41.704625 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:01:41 crc kubenswrapper[4869]: E0314 10:01:41.706318 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:01:48 crc kubenswrapper[4869]: I0314 10:01:48.703639 4869 scope.go:117] "RemoveContainer" 
containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:01:48 crc kubenswrapper[4869]: I0314 10:01:48.704036 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:01:48 crc kubenswrapper[4869]: E0314 10:01:48.704147 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:01:48 crc kubenswrapper[4869]: E0314 10:01:48.704282 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:01:55 crc kubenswrapper[4869]: I0314 10:01:55.704149 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:01:55 crc kubenswrapper[4869]: E0314 10:01:55.705003 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.144776 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558042-kd54j"] Mar 14 10:02:00 crc kubenswrapper[4869]: E0314 10:02:00.146032 4869 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2a8743-c0f6-4e8b-b47f-157d2b478e00" containerName="keystone-cron" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.146055 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2a8743-c0f6-4e8b-b47f-157d2b478e00" containerName="keystone-cron" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.146433 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2a8743-c0f6-4e8b-b47f-157d2b478e00" containerName="keystone-cron" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.147553 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558042-kd54j" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.150150 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.150522 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.153530 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.154105 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558042-kd54j"] Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.228369 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcxsv\" (UniqueName: \"kubernetes.io/projected/5473421f-b4f7-4114-806a-cf3c0237fe1f-kube-api-access-zcxsv\") pod \"auto-csr-approver-29558042-kd54j\" (UID: \"5473421f-b4f7-4114-806a-cf3c0237fe1f\") " pod="openshift-infra/auto-csr-approver-29558042-kd54j" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.330375 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zcxsv\" (UniqueName: \"kubernetes.io/projected/5473421f-b4f7-4114-806a-cf3c0237fe1f-kube-api-access-zcxsv\") pod \"auto-csr-approver-29558042-kd54j\" (UID: \"5473421f-b4f7-4114-806a-cf3c0237fe1f\") " pod="openshift-infra/auto-csr-approver-29558042-kd54j" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.352498 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcxsv\" (UniqueName: \"kubernetes.io/projected/5473421f-b4f7-4114-806a-cf3c0237fe1f-kube-api-access-zcxsv\") pod \"auto-csr-approver-29558042-kd54j\" (UID: \"5473421f-b4f7-4114-806a-cf3c0237fe1f\") " pod="openshift-infra/auto-csr-approver-29558042-kd54j" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.480649 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558042-kd54j" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.705165 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:02:00 crc kubenswrapper[4869]: E0314 10:02:00.705633 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:02:00 crc kubenswrapper[4869]: I0314 10:02:00.951065 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558042-kd54j"] Mar 14 10:02:01 crc kubenswrapper[4869]: I0314 10:02:01.789099 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558042-kd54j" 
event={"ID":"5473421f-b4f7-4114-806a-cf3c0237fe1f","Type":"ContainerStarted","Data":"aca79ff621a7a85658e681d2fd2f1402563e2820250ee15872f3e01648e03dfb"} Mar 14 10:02:02 crc kubenswrapper[4869]: I0314 10:02:02.798453 4869 generic.go:334] "Generic (PLEG): container finished" podID="5473421f-b4f7-4114-806a-cf3c0237fe1f" containerID="e453a24ea042b8ae141607eac2050585bc5a425e9ae20fa003288148496b2282" exitCode=0 Mar 14 10:02:02 crc kubenswrapper[4869]: I0314 10:02:02.798501 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558042-kd54j" event={"ID":"5473421f-b4f7-4114-806a-cf3c0237fe1f","Type":"ContainerDied","Data":"e453a24ea042b8ae141607eac2050585bc5a425e9ae20fa003288148496b2282"} Mar 14 10:02:03 crc kubenswrapper[4869]: I0314 10:02:03.703794 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:02:03 crc kubenswrapper[4869]: E0314 10:02:03.704333 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:02:04 crc kubenswrapper[4869]: I0314 10:02:04.181543 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558042-kd54j" Mar 14 10:02:04 crc kubenswrapper[4869]: I0314 10:02:04.309405 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcxsv\" (UniqueName: \"kubernetes.io/projected/5473421f-b4f7-4114-806a-cf3c0237fe1f-kube-api-access-zcxsv\") pod \"5473421f-b4f7-4114-806a-cf3c0237fe1f\" (UID: \"5473421f-b4f7-4114-806a-cf3c0237fe1f\") " Mar 14 10:02:04 crc kubenswrapper[4869]: I0314 10:02:04.315317 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5473421f-b4f7-4114-806a-cf3c0237fe1f-kube-api-access-zcxsv" (OuterVolumeSpecName: "kube-api-access-zcxsv") pod "5473421f-b4f7-4114-806a-cf3c0237fe1f" (UID: "5473421f-b4f7-4114-806a-cf3c0237fe1f"). InnerVolumeSpecName "kube-api-access-zcxsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:02:04 crc kubenswrapper[4869]: I0314 10:02:04.411729 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcxsv\" (UniqueName: \"kubernetes.io/projected/5473421f-b4f7-4114-806a-cf3c0237fe1f-kube-api-access-zcxsv\") on node \"crc\" DevicePath \"\"" Mar 14 10:02:04 crc kubenswrapper[4869]: I0314 10:02:04.830428 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558042-kd54j" event={"ID":"5473421f-b4f7-4114-806a-cf3c0237fe1f","Type":"ContainerDied","Data":"aca79ff621a7a85658e681d2fd2f1402563e2820250ee15872f3e01648e03dfb"} Mar 14 10:02:04 crc kubenswrapper[4869]: I0314 10:02:04.830475 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aca79ff621a7a85658e681d2fd2f1402563e2820250ee15872f3e01648e03dfb" Mar 14 10:02:04 crc kubenswrapper[4869]: I0314 10:02:04.830528 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558042-kd54j" Mar 14 10:02:05 crc kubenswrapper[4869]: I0314 10:02:05.274690 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558036-mcscc"] Mar 14 10:02:05 crc kubenswrapper[4869]: I0314 10:02:05.283455 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558036-mcscc"] Mar 14 10:02:05 crc kubenswrapper[4869]: I0314 10:02:05.718323 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313d2a07-63ff-47b7-9fda-0f7217fb33a7" path="/var/lib/kubelet/pods/313d2a07-63ff-47b7-9fda-0f7217fb33a7/volumes" Mar 14 10:02:07 crc kubenswrapper[4869]: I0314 10:02:07.624709 4869 scope.go:117] "RemoveContainer" containerID="849d18ee71153524147d205d968e16d57d57bfc865aca7bed0e5ffb3b6a46044" Mar 14 10:02:09 crc kubenswrapper[4869]: I0314 10:02:09.703977 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:02:09 crc kubenswrapper[4869]: E0314 10:02:09.704637 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:02:16 crc kubenswrapper[4869]: I0314 10:02:16.705365 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:02:16 crc kubenswrapper[4869]: E0314 10:02:16.706231 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:02:18 crc kubenswrapper[4869]: I0314 10:02:18.704488 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:02:18 crc kubenswrapper[4869]: E0314 10:02:18.705039 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:02:23 crc kubenswrapper[4869]: I0314 10:02:23.703722 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:02:23 crc kubenswrapper[4869]: E0314 10:02:23.704341 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:02:27 crc kubenswrapper[4869]: I0314 10:02:27.719712 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:02:27 crc kubenswrapper[4869]: E0314 10:02:27.720462 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:02:30 crc kubenswrapper[4869]: I0314 10:02:30.704116 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:02:30 crc kubenswrapper[4869]: E0314 10:02:30.704665 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:02:35 crc kubenswrapper[4869]: I0314 10:02:35.704619 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:02:35 crc kubenswrapper[4869]: E0314 10:02:35.705302 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:02:42 crc kubenswrapper[4869]: I0314 10:02:42.704647 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:02:42 crc kubenswrapper[4869]: E0314 10:02:42.705449 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:02:44 crc 
kubenswrapper[4869]: I0314 10:02:44.703681 4869 scope.go:117] "RemoveContainer" containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:02:44 crc kubenswrapper[4869]: E0314 10:02:44.704215 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:02:48 crc kubenswrapper[4869]: I0314 10:02:48.704312 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:02:48 crc kubenswrapper[4869]: E0314 10:02:48.704892 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:02:54 crc kubenswrapper[4869]: I0314 10:02:54.703865 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:02:54 crc kubenswrapper[4869]: E0314 10:02:54.704473 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:02:55 crc kubenswrapper[4869]: I0314 10:02:55.711296 4869 scope.go:117] "RemoveContainer" 
containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:02:56 crc kubenswrapper[4869]: I0314 10:02:56.365874 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752"} Mar 14 10:03:01 crc kubenswrapper[4869]: I0314 10:03:01.704856 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:03:01 crc kubenswrapper[4869]: E0314 10:03:01.706122 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:03:04 crc kubenswrapper[4869]: I0314 10:03:04.539458 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:03:04 crc kubenswrapper[4869]: I0314 10:03:04.539903 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:03:05 crc kubenswrapper[4869]: I0314 10:03:05.444652 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" exitCode=1 Mar 14 10:03:05 crc kubenswrapper[4869]: I0314 10:03:05.444721 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752"} Mar 14 10:03:05 crc kubenswrapper[4869]: I0314 10:03:05.445008 4869 scope.go:117] "RemoveContainer" 
containerID="e039b2a5ac3ec7384d5dc7f667e7926738ebd18d69eec89cbae3e21d1bfb2eb2" Mar 14 10:03:05 crc kubenswrapper[4869]: I0314 10:03:05.445565 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:03:05 crc kubenswrapper[4869]: E0314 10:03:05.445829 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:03:08 crc kubenswrapper[4869]: I0314 10:03:08.704462 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:03:08 crc kubenswrapper[4869]: E0314 10:03:08.706242 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:03:13 crc kubenswrapper[4869]: I0314 10:03:13.703822 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:03:13 crc kubenswrapper[4869]: E0314 10:03:13.704589 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:03:14 crc kubenswrapper[4869]: I0314 10:03:14.539644 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:03:14 crc kubenswrapper[4869]: I0314 10:03:14.539699 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:03:14 crc kubenswrapper[4869]: I0314 10:03:14.540623 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:03:14 crc kubenswrapper[4869]: E0314 10:03:14.540855 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:03:22 crc kubenswrapper[4869]: I0314 10:03:22.704341 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:03:22 crc kubenswrapper[4869]: E0314 10:03:22.705065 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:03:24 crc kubenswrapper[4869]: I0314 10:03:24.704790 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:03:24 crc kubenswrapper[4869]: E0314 10:03:24.705658 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:03:27 crc kubenswrapper[4869]: I0314 10:03:27.714059 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:03:27 crc kubenswrapper[4869]: E0314 10:03:27.715985 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:03:35 crc kubenswrapper[4869]: I0314 10:03:35.704036 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:03:35 crc kubenswrapper[4869]: E0314 10:03:35.705123 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:03:37 crc kubenswrapper[4869]: I0314 10:03:37.709099 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:03:38 crc kubenswrapper[4869]: I0314 10:03:38.734492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd"} Mar 14 10:03:42 crc kubenswrapper[4869]: I0314 10:03:42.704287 4869 scope.go:117] 
"RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:03:42 crc kubenswrapper[4869]: E0314 10:03:42.705327 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:03:44 crc kubenswrapper[4869]: I0314 10:03:44.404484 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:03:44 crc kubenswrapper[4869]: I0314 10:03:44.404813 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:03:46 crc kubenswrapper[4869]: I0314 10:03:46.811464 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" exitCode=1 Mar 14 10:03:46 crc kubenswrapper[4869]: I0314 10:03:46.811624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd"} Mar 14 10:03:46 crc kubenswrapper[4869]: I0314 10:03:46.811924 4869 scope.go:117] "RemoveContainer" containerID="00ec6d46356dc460a77e477b2dc32c792db9caf84fd7b9878b12c12e1d40c327" Mar 14 10:03:46 crc kubenswrapper[4869]: I0314 10:03:46.812847 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:03:46 crc kubenswrapper[4869]: E0314 10:03:46.813136 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:03:50 crc kubenswrapper[4869]: I0314 10:03:50.703920 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:03:50 crc kubenswrapper[4869]: E0314 10:03:50.704641 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:03:53 crc kubenswrapper[4869]: I0314 10:03:53.703600 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:03:53 crc kubenswrapper[4869]: E0314 10:03:53.704252 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:03:54 crc kubenswrapper[4869]: I0314 10:03:54.405114 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:03:54 crc kubenswrapper[4869]: I0314 10:03:54.406201 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:03:54 crc kubenswrapper[4869]: I0314 10:03:54.406246 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 
10:03:54 crc kubenswrapper[4869]: E0314 10:03:54.406459 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:03:54 crc kubenswrapper[4869]: I0314 10:03:54.879076 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:03:54 crc kubenswrapper[4869]: E0314 10:03:54.879746 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.155473 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558044-jzltw"] Mar 14 10:04:00 crc kubenswrapper[4869]: E0314 10:04:00.157149 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5473421f-b4f7-4114-806a-cf3c0237fe1f" containerName="oc" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.157223 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5473421f-b4f7-4114-806a-cf3c0237fe1f" containerName="oc" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.157450 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5473421f-b4f7-4114-806a-cf3c0237fe1f" containerName="oc" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.158228 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558044-jzltw" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.169579 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.169711 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.171765 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.173851 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558044-jzltw"] Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.256249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb87z\" (UniqueName: \"kubernetes.io/projected/bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab-kube-api-access-hb87z\") pod \"auto-csr-approver-29558044-jzltw\" (UID: \"bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab\") " pod="openshift-infra/auto-csr-approver-29558044-jzltw" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.358277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb87z\" (UniqueName: \"kubernetes.io/projected/bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab-kube-api-access-hb87z\") pod \"auto-csr-approver-29558044-jzltw\" (UID: \"bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab\") " pod="openshift-infra/auto-csr-approver-29558044-jzltw" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.377772 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb87z\" (UniqueName: \"kubernetes.io/projected/bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab-kube-api-access-hb87z\") pod \"auto-csr-approver-29558044-jzltw\" (UID: \"bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab\") " 
pod="openshift-infra/auto-csr-approver-29558044-jzltw" Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.481298 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558044-jzltw" Mar 14 10:04:00 crc kubenswrapper[4869]: W0314 10:04:00.946259 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3ca2c8_8e60_439a_93aa_6c9c3d9678ab.slice/crio-e511a98c6052be0adf16f2eff7dbeee885d730a2343b367daf66912aa3d54f8a WatchSource:0}: Error finding container e511a98c6052be0adf16f2eff7dbeee885d730a2343b367daf66912aa3d54f8a: Status 404 returned error can't find the container with id e511a98c6052be0adf16f2eff7dbeee885d730a2343b367daf66912aa3d54f8a Mar 14 10:04:00 crc kubenswrapper[4869]: I0314 10:04:00.947399 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558044-jzltw"] Mar 14 10:04:01 crc kubenswrapper[4869]: I0314 10:04:01.981647 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558044-jzltw" event={"ID":"bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab","Type":"ContainerStarted","Data":"e511a98c6052be0adf16f2eff7dbeee885d730a2343b367daf66912aa3d54f8a"} Mar 14 10:04:02 crc kubenswrapper[4869]: I0314 10:04:02.995249 4869 generic.go:334] "Generic (PLEG): container finished" podID="bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab" containerID="5768e85ea9639607f1ce9f04b0099bf155d09ce3455cff7166ce38af434df858" exitCode=0 Mar 14 10:04:02 crc kubenswrapper[4869]: I0314 10:04:02.995312 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558044-jzltw" event={"ID":"bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab","Type":"ContainerDied","Data":"5768e85ea9639607f1ce9f04b0099bf155d09ce3455cff7166ce38af434df858"} Mar 14 10:04:03 crc kubenswrapper[4869]: I0314 10:04:03.704331 4869 scope.go:117] "RemoveContainer" 
containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:04:03 crc kubenswrapper[4869]: E0314 10:04:03.704915 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:04:04 crc kubenswrapper[4869]: I0314 10:04:04.411499 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558044-jzltw" Mar 14 10:04:04 crc kubenswrapper[4869]: I0314 10:04:04.549418 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb87z\" (UniqueName: \"kubernetes.io/projected/bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab-kube-api-access-hb87z\") pod \"bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab\" (UID: \"bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab\") " Mar 14 10:04:04 crc kubenswrapper[4869]: I0314 10:04:04.555427 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab-kube-api-access-hb87z" (OuterVolumeSpecName: "kube-api-access-hb87z") pod "bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab" (UID: "bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab"). InnerVolumeSpecName "kube-api-access-hb87z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:04:04 crc kubenswrapper[4869]: I0314 10:04:04.651664 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb87z\" (UniqueName: \"kubernetes.io/projected/bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab-kube-api-access-hb87z\") on node \"crc\" DevicePath \"\"" Mar 14 10:04:05 crc kubenswrapper[4869]: I0314 10:04:05.017070 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558044-jzltw" event={"ID":"bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab","Type":"ContainerDied","Data":"e511a98c6052be0adf16f2eff7dbeee885d730a2343b367daf66912aa3d54f8a"} Mar 14 10:04:05 crc kubenswrapper[4869]: I0314 10:04:05.017104 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558044-jzltw" Mar 14 10:04:05 crc kubenswrapper[4869]: I0314 10:04:05.017124 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e511a98c6052be0adf16f2eff7dbeee885d730a2343b367daf66912aa3d54f8a" Mar 14 10:04:05 crc kubenswrapper[4869]: I0314 10:04:05.479765 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558038-dmq4r"] Mar 14 10:04:05 crc kubenswrapper[4869]: I0314 10:04:05.487266 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558038-dmq4r"] Mar 14 10:04:05 crc kubenswrapper[4869]: I0314 10:04:05.704053 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:04:05 crc kubenswrapper[4869]: E0314 10:04:05.705246 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:04:05 crc kubenswrapper[4869]: I0314 10:04:05.717205 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578c26ed-02c8-47f5-9e36-151cf98c6537" path="/var/lib/kubelet/pods/578c26ed-02c8-47f5-9e36-151cf98c6537/volumes" Mar 14 10:04:06 crc kubenswrapper[4869]: I0314 10:04:06.704259 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:04:06 crc kubenswrapper[4869]: E0314 10:04:06.704765 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:04:07 crc kubenswrapper[4869]: I0314 10:04:07.735459 4869 scope.go:117] "RemoveContainer" containerID="ad071eafd7933ab3e925a7f63ff49a7a8b58510926ad5766e20f8377b85e5c4a" Mar 14 10:04:15 crc kubenswrapper[4869]: I0314 10:04:15.704189 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:04:17 crc kubenswrapper[4869]: I0314 10:04:17.118852 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"c5b092dc662f0b0b3f6f0a1cb7a1e81ef10559c1c3e51ca6b224aa3604bd8a08"} Mar 14 10:04:18 crc kubenswrapper[4869]: I0314 10:04:18.704756 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:04:18 crc kubenswrapper[4869]: E0314 10:04:18.705427 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:04:19 crc kubenswrapper[4869]: I0314 10:04:19.704189 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:04:19 crc kubenswrapper[4869]: E0314 10:04:19.704771 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:04:33 crc kubenswrapper[4869]: I0314 10:04:33.704741 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:04:33 crc kubenswrapper[4869]: I0314 10:04:33.705281 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:04:33 crc kubenswrapper[4869]: E0314 10:04:33.705416 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:04:33 crc kubenswrapper[4869]: E0314 10:04:33.705886 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:04:46 crc kubenswrapper[4869]: I0314 
10:04:46.704742 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:04:46 crc kubenswrapper[4869]: I0314 10:04:46.705531 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:04:46 crc kubenswrapper[4869]: E0314 10:04:46.705629 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:04:46 crc kubenswrapper[4869]: E0314 10:04:46.705835 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:05:00 crc kubenswrapper[4869]: I0314 10:05:00.703689 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:05:00 crc kubenswrapper[4869]: I0314 10:05:00.704422 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:05:00 crc kubenswrapper[4869]: E0314 10:05:00.704571 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:05:00 crc kubenswrapper[4869]: E0314 10:05:00.704743 4869 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:05:11 crc kubenswrapper[4869]: I0314 10:05:11.705254 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:05:11 crc kubenswrapper[4869]: E0314 10:05:11.706600 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:05:15 crc kubenswrapper[4869]: I0314 10:05:15.705086 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:05:15 crc kubenswrapper[4869]: E0314 10:05:15.705972 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:05:22 crc kubenswrapper[4869]: I0314 10:05:22.703948 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:05:22 crc kubenswrapper[4869]: E0314 10:05:22.704655 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" 
pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:05:26 crc kubenswrapper[4869]: I0314 10:05:26.704847 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:05:26 crc kubenswrapper[4869]: E0314 10:05:26.705680 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:05:34 crc kubenswrapper[4869]: I0314 10:05:34.704683 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:05:34 crc kubenswrapper[4869]: E0314 10:05:34.705637 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:05:41 crc kubenswrapper[4869]: I0314 10:05:41.704903 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:05:41 crc kubenswrapper[4869]: E0314 10:05:41.705912 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:05:46 crc kubenswrapper[4869]: I0314 10:05:46.704672 4869 scope.go:117] "RemoveContainer" 
containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:05:46 crc kubenswrapper[4869]: E0314 10:05:46.705451 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:05:53 crc kubenswrapper[4869]: I0314 10:05:53.704806 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:05:53 crc kubenswrapper[4869]: E0314 10:05:53.705733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.156040 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558046-ff8np"] Mar 14 10:06:00 crc kubenswrapper[4869]: E0314 10:06:00.157170 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab" containerName="oc" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.157188 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab" containerName="oc" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.157449 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab" containerName="oc" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.158339 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558046-ff8np" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.160603 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.160659 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.160611 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.166264 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558046-ff8np"] Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.328943 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlszl\" (UniqueName: \"kubernetes.io/projected/b49e262e-0287-4565-9886-6bdec491d7a9-kube-api-access-zlszl\") pod \"auto-csr-approver-29558046-ff8np\" (UID: \"b49e262e-0287-4565-9886-6bdec491d7a9\") " pod="openshift-infra/auto-csr-approver-29558046-ff8np" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.431653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlszl\" (UniqueName: \"kubernetes.io/projected/b49e262e-0287-4565-9886-6bdec491d7a9-kube-api-access-zlszl\") pod \"auto-csr-approver-29558046-ff8np\" (UID: \"b49e262e-0287-4565-9886-6bdec491d7a9\") " pod="openshift-infra/auto-csr-approver-29558046-ff8np" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.452113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlszl\" (UniqueName: \"kubernetes.io/projected/b49e262e-0287-4565-9886-6bdec491d7a9-kube-api-access-zlszl\") pod \"auto-csr-approver-29558046-ff8np\" (UID: \"b49e262e-0287-4565-9886-6bdec491d7a9\") " 
pod="openshift-infra/auto-csr-approver-29558046-ff8np" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.475250 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558046-ff8np" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.704095 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:06:00 crc kubenswrapper[4869]: E0314 10:06:00.704694 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.964163 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558046-ff8np"] Mar 14 10:06:00 crc kubenswrapper[4869]: I0314 10:06:00.971705 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 10:06:01 crc kubenswrapper[4869]: I0314 10:06:01.229860 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558046-ff8np" event={"ID":"b49e262e-0287-4565-9886-6bdec491d7a9","Type":"ContainerStarted","Data":"cd07092005ee369d838315f63974693120f1c524451d480d0bd7cd534c818fa4"} Mar 14 10:06:02 crc kubenswrapper[4869]: I0314 10:06:02.240790 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558046-ff8np" event={"ID":"b49e262e-0287-4565-9886-6bdec491d7a9","Type":"ContainerStarted","Data":"6172f4986f073bbe959bc78263dfcdc14ac2487be8c6e71ef7e0a830916d6869"} Mar 14 10:06:02 crc kubenswrapper[4869]: I0314 10:06:02.261743 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29558046-ff8np" podStartSLOduration=1.381877862 podStartE2EDuration="2.261717573s" podCreationTimestamp="2026-03-14 10:06:00 +0000 UTC" firstStartedPulling="2026-03-14 10:06:00.971460265 +0000 UTC m=+4113.943742318" lastFinishedPulling="2026-03-14 10:06:01.851299926 +0000 UTC m=+4114.823582029" observedRunningTime="2026-03-14 10:06:02.255193883 +0000 UTC m=+4115.227475936" watchObservedRunningTime="2026-03-14 10:06:02.261717573 +0000 UTC m=+4115.233999626" Mar 14 10:06:03 crc kubenswrapper[4869]: I0314 10:06:03.252335 4869 generic.go:334] "Generic (PLEG): container finished" podID="b49e262e-0287-4565-9886-6bdec491d7a9" containerID="6172f4986f073bbe959bc78263dfcdc14ac2487be8c6e71ef7e0a830916d6869" exitCode=0 Mar 14 10:06:03 crc kubenswrapper[4869]: I0314 10:06:03.252854 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558046-ff8np" event={"ID":"b49e262e-0287-4565-9886-6bdec491d7a9","Type":"ContainerDied","Data":"6172f4986f073bbe959bc78263dfcdc14ac2487be8c6e71ef7e0a830916d6869"} Mar 14 10:06:04 crc kubenswrapper[4869]: I0314 10:06:04.643670 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558046-ff8np" Mar 14 10:06:04 crc kubenswrapper[4869]: I0314 10:06:04.703609 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:06:04 crc kubenswrapper[4869]: E0314 10:06:04.703944 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:06:04 crc kubenswrapper[4869]: I0314 10:06:04.827811 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlszl\" (UniqueName: \"kubernetes.io/projected/b49e262e-0287-4565-9886-6bdec491d7a9-kube-api-access-zlszl\") pod \"b49e262e-0287-4565-9886-6bdec491d7a9\" (UID: \"b49e262e-0287-4565-9886-6bdec491d7a9\") " Mar 14 10:06:04 crc kubenswrapper[4869]: I0314 10:06:04.832958 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49e262e-0287-4565-9886-6bdec491d7a9-kube-api-access-zlszl" (OuterVolumeSpecName: "kube-api-access-zlszl") pod "b49e262e-0287-4565-9886-6bdec491d7a9" (UID: "b49e262e-0287-4565-9886-6bdec491d7a9"). InnerVolumeSpecName "kube-api-access-zlszl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:06:04 crc kubenswrapper[4869]: I0314 10:06:04.932086 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlszl\" (UniqueName: \"kubernetes.io/projected/b49e262e-0287-4565-9886-6bdec491d7a9-kube-api-access-zlszl\") on node \"crc\" DevicePath \"\"" Mar 14 10:06:05 crc kubenswrapper[4869]: I0314 10:06:05.277130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558046-ff8np" event={"ID":"b49e262e-0287-4565-9886-6bdec491d7a9","Type":"ContainerDied","Data":"cd07092005ee369d838315f63974693120f1c524451d480d0bd7cd534c818fa4"} Mar 14 10:06:05 crc kubenswrapper[4869]: I0314 10:06:05.277179 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd07092005ee369d838315f63974693120f1c524451d480d0bd7cd534c818fa4" Mar 14 10:06:05 crc kubenswrapper[4869]: I0314 10:06:05.277240 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558046-ff8np" Mar 14 10:06:05 crc kubenswrapper[4869]: I0314 10:06:05.350415 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558040-8dwl5"] Mar 14 10:06:05 crc kubenswrapper[4869]: I0314 10:06:05.364315 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558040-8dwl5"] Mar 14 10:06:05 crc kubenswrapper[4869]: I0314 10:06:05.715581 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31cbd2d2-f710-4534-99d9-8263ee4cf905" path="/var/lib/kubelet/pods/31cbd2d2-f710-4534-99d9-8263ee4cf905/volumes" Mar 14 10:06:11 crc kubenswrapper[4869]: I0314 10:06:11.708281 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:06:11 crc kubenswrapper[4869]: E0314 10:06:11.709737 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:06:15 crc kubenswrapper[4869]: I0314 10:06:15.903800 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7d8w9"] Mar 14 10:06:15 crc kubenswrapper[4869]: E0314 10:06:15.906240 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49e262e-0287-4565-9886-6bdec491d7a9" containerName="oc" Mar 14 10:06:15 crc kubenswrapper[4869]: I0314 10:06:15.906345 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49e262e-0287-4565-9886-6bdec491d7a9" containerName="oc" Mar 14 10:06:15 crc kubenswrapper[4869]: I0314 10:06:15.906728 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49e262e-0287-4565-9886-6bdec491d7a9" containerName="oc" Mar 14 10:06:15 crc kubenswrapper[4869]: I0314 10:06:15.908853 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:15 crc kubenswrapper[4869]: I0314 10:06:15.913821 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7d8w9"] Mar 14 10:06:15 crc kubenswrapper[4869]: I0314 10:06:15.973889 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-catalog-content\") pod \"redhat-operators-7d8w9\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:15 crc kubenswrapper[4869]: I0314 10:06:15.973929 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78zlp\" (UniqueName: \"kubernetes.io/projected/29f4ab20-bd19-4700-a3b9-e33f12001037-kube-api-access-78zlp\") pod \"redhat-operators-7d8w9\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:15 crc kubenswrapper[4869]: I0314 10:06:15.974080 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-utilities\") pod \"redhat-operators-7d8w9\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:16 crc kubenswrapper[4869]: I0314 10:06:16.075140 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-catalog-content\") pod \"redhat-operators-7d8w9\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:16 crc kubenswrapper[4869]: I0314 10:06:16.075188 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-78zlp\" (UniqueName: \"kubernetes.io/projected/29f4ab20-bd19-4700-a3b9-e33f12001037-kube-api-access-78zlp\") pod \"redhat-operators-7d8w9\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:16 crc kubenswrapper[4869]: I0314 10:06:16.075295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-utilities\") pod \"redhat-operators-7d8w9\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:16 crc kubenswrapper[4869]: I0314 10:06:16.075776 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-catalog-content\") pod \"redhat-operators-7d8w9\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:16 crc kubenswrapper[4869]: I0314 10:06:16.075822 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-utilities\") pod \"redhat-operators-7d8w9\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:16 crc kubenswrapper[4869]: I0314 10:06:16.104675 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78zlp\" (UniqueName: \"kubernetes.io/projected/29f4ab20-bd19-4700-a3b9-e33f12001037-kube-api-access-78zlp\") pod \"redhat-operators-7d8w9\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:16 crc kubenswrapper[4869]: I0314 10:06:16.232333 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:16 crc kubenswrapper[4869]: I0314 10:06:16.705066 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:06:16 crc kubenswrapper[4869]: E0314 10:06:16.705565 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:06:16 crc kubenswrapper[4869]: I0314 10:06:16.766193 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7d8w9"] Mar 14 10:06:17 crc kubenswrapper[4869]: I0314 10:06:17.415001 4869 generic.go:334] "Generic (PLEG): container finished" podID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerID="282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721" exitCode=0 Mar 14 10:06:17 crc kubenswrapper[4869]: I0314 10:06:17.415424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d8w9" event={"ID":"29f4ab20-bd19-4700-a3b9-e33f12001037","Type":"ContainerDied","Data":"282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721"} Mar 14 10:06:17 crc kubenswrapper[4869]: I0314 10:06:17.415573 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d8w9" event={"ID":"29f4ab20-bd19-4700-a3b9-e33f12001037","Type":"ContainerStarted","Data":"54df8bab41034e65dfb6a5555eb02d599ec2b3a2c6129a7b66533b8e1aeb3651"} Mar 14 10:06:18 crc kubenswrapper[4869]: I0314 10:06:18.426711 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d8w9" 
event={"ID":"29f4ab20-bd19-4700-a3b9-e33f12001037","Type":"ContainerStarted","Data":"fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf"} Mar 14 10:06:20 crc kubenswrapper[4869]: I0314 10:06:20.446340 4869 generic.go:334] "Generic (PLEG): container finished" podID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerID="fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf" exitCode=0 Mar 14 10:06:20 crc kubenswrapper[4869]: I0314 10:06:20.446420 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d8w9" event={"ID":"29f4ab20-bd19-4700-a3b9-e33f12001037","Type":"ContainerDied","Data":"fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf"} Mar 14 10:06:21 crc kubenswrapper[4869]: I0314 10:06:21.462106 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d8w9" event={"ID":"29f4ab20-bd19-4700-a3b9-e33f12001037","Type":"ContainerStarted","Data":"7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073"} Mar 14 10:06:21 crc kubenswrapper[4869]: I0314 10:06:21.500039 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7d8w9" podStartSLOduration=2.75683402 podStartE2EDuration="6.500022099s" podCreationTimestamp="2026-03-14 10:06:15 +0000 UTC" firstStartedPulling="2026-03-14 10:06:17.41813882 +0000 UTC m=+4130.390420873" lastFinishedPulling="2026-03-14 10:06:21.161326879 +0000 UTC m=+4134.133608952" observedRunningTime="2026-03-14 10:06:21.489984933 +0000 UTC m=+4134.462267006" watchObservedRunningTime="2026-03-14 10:06:21.500022099 +0000 UTC m=+4134.472304162" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.283426 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sbqtd"] Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.291322 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.293140 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbqtd"] Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.442648 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr464\" (UniqueName: \"kubernetes.io/projected/d26d547e-4d64-4622-8a19-0529f324178c-kube-api-access-wr464\") pod \"certified-operators-sbqtd\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.442954 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-catalog-content\") pod \"certified-operators-sbqtd\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.443102 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-utilities\") pod \"certified-operators-sbqtd\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.545295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr464\" (UniqueName: \"kubernetes.io/projected/d26d547e-4d64-4622-8a19-0529f324178c-kube-api-access-wr464\") pod \"certified-operators-sbqtd\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.545390 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-catalog-content\") pod \"certified-operators-sbqtd\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.545425 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-utilities\") pod \"certified-operators-sbqtd\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.545914 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-utilities\") pod \"certified-operators-sbqtd\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.546007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-catalog-content\") pod \"certified-operators-sbqtd\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.575293 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr464\" (UniqueName: \"kubernetes.io/projected/d26d547e-4d64-4622-8a19-0529f324178c-kube-api-access-wr464\") pod \"certified-operators-sbqtd\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.622552 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:23 crc kubenswrapper[4869]: I0314 10:06:23.706608 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:06:23 crc kubenswrapper[4869]: E0314 10:06:23.706906 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:06:24 crc kubenswrapper[4869]: W0314 10:06:24.345952 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd26d547e_4d64_4622_8a19_0529f324178c.slice/crio-926c47892f8c133cae4ab90349dca100c926e24c81655347abee5ad2b9c63bc6 WatchSource:0}: Error finding container 926c47892f8c133cae4ab90349dca100c926e24c81655347abee5ad2b9c63bc6: Status 404 returned error can't find the container with id 926c47892f8c133cae4ab90349dca100c926e24c81655347abee5ad2b9c63bc6 Mar 14 10:06:24 crc kubenswrapper[4869]: I0314 10:06:24.346409 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbqtd"] Mar 14 10:06:24 crc kubenswrapper[4869]: I0314 10:06:24.487455 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbqtd" event={"ID":"d26d547e-4d64-4622-8a19-0529f324178c","Type":"ContainerStarted","Data":"926c47892f8c133cae4ab90349dca100c926e24c81655347abee5ad2b9c63bc6"} Mar 14 10:06:25 crc kubenswrapper[4869]: I0314 10:06:25.498754 4869 generic.go:334] "Generic (PLEG): container finished" podID="d26d547e-4d64-4622-8a19-0529f324178c" containerID="6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842" exitCode=0 Mar 14 10:06:25 crc 
kubenswrapper[4869]: I0314 10:06:25.498835 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbqtd" event={"ID":"d26d547e-4d64-4622-8a19-0529f324178c","Type":"ContainerDied","Data":"6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842"} Mar 14 10:06:26 crc kubenswrapper[4869]: I0314 10:06:26.232663 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:26 crc kubenswrapper[4869]: I0314 10:06:26.232727 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:26 crc kubenswrapper[4869]: I0314 10:06:26.514855 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbqtd" event={"ID":"d26d547e-4d64-4622-8a19-0529f324178c","Type":"ContainerStarted","Data":"5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867"} Mar 14 10:06:27 crc kubenswrapper[4869]: I0314 10:06:27.285692 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7d8w9" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="registry-server" probeResult="failure" output=< Mar 14 10:06:27 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 10:06:27 crc kubenswrapper[4869]: > Mar 14 10:06:27 crc kubenswrapper[4869]: I0314 10:06:27.527351 4869 generic.go:334] "Generic (PLEG): container finished" podID="d26d547e-4d64-4622-8a19-0529f324178c" containerID="5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867" exitCode=0 Mar 14 10:06:27 crc kubenswrapper[4869]: I0314 10:06:27.527394 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbqtd" event={"ID":"d26d547e-4d64-4622-8a19-0529f324178c","Type":"ContainerDied","Data":"5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867"} Mar 14 
10:06:29 crc kubenswrapper[4869]: I0314 10:06:29.557765 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbqtd" event={"ID":"d26d547e-4d64-4622-8a19-0529f324178c","Type":"ContainerStarted","Data":"cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a"} Mar 14 10:06:29 crc kubenswrapper[4869]: I0314 10:06:29.587776 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sbqtd" podStartSLOduration=3.7006049819999998 podStartE2EDuration="6.587751533s" podCreationTimestamp="2026-03-14 10:06:23 +0000 UTC" firstStartedPulling="2026-03-14 10:06:25.501227481 +0000 UTC m=+4138.473509544" lastFinishedPulling="2026-03-14 10:06:28.388374042 +0000 UTC m=+4141.360656095" observedRunningTime="2026-03-14 10:06:29.583325014 +0000 UTC m=+4142.555607097" watchObservedRunningTime="2026-03-14 10:06:29.587751533 +0000 UTC m=+4142.560033616" Mar 14 10:06:31 crc kubenswrapper[4869]: I0314 10:06:31.704480 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:06:31 crc kubenswrapper[4869]: E0314 10:06:31.705113 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:06:33 crc kubenswrapper[4869]: I0314 10:06:33.623608 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:33 crc kubenswrapper[4869]: I0314 10:06:33.623684 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:33 crc kubenswrapper[4869]: I0314 10:06:33.717587 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:34 crc kubenswrapper[4869]: I0314 10:06:34.684474 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:34 crc kubenswrapper[4869]: I0314 10:06:34.738750 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbqtd"] Mar 14 10:06:36 crc kubenswrapper[4869]: I0314 10:06:36.639724 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sbqtd" podUID="d26d547e-4d64-4622-8a19-0529f324178c" containerName="registry-server" containerID="cri-o://cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a" gracePeriod=2 Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.154745 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.246379 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-catalog-content\") pod \"d26d547e-4d64-4622-8a19-0529f324178c\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.246765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-utilities\") pod \"d26d547e-4d64-4622-8a19-0529f324178c\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.246827 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr464\" (UniqueName: 
\"kubernetes.io/projected/d26d547e-4d64-4622-8a19-0529f324178c-kube-api-access-wr464\") pod \"d26d547e-4d64-4622-8a19-0529f324178c\" (UID: \"d26d547e-4d64-4622-8a19-0529f324178c\") " Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.247408 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-utilities" (OuterVolumeSpecName: "utilities") pod "d26d547e-4d64-4622-8a19-0529f324178c" (UID: "d26d547e-4d64-4622-8a19-0529f324178c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.248220 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.256444 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26d547e-4d64-4622-8a19-0529f324178c-kube-api-access-wr464" (OuterVolumeSpecName: "kube-api-access-wr464") pod "d26d547e-4d64-4622-8a19-0529f324178c" (UID: "d26d547e-4d64-4622-8a19-0529f324178c"). InnerVolumeSpecName "kube-api-access-wr464". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.287783 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7d8w9" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="registry-server" probeResult="failure" output=< Mar 14 10:06:37 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 10:06:37 crc kubenswrapper[4869]: > Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.295846 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d26d547e-4d64-4622-8a19-0529f324178c" (UID: "d26d547e-4d64-4622-8a19-0529f324178c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.350321 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr464\" (UniqueName: \"kubernetes.io/projected/d26d547e-4d64-4622-8a19-0529f324178c-kube-api-access-wr464\") on node \"crc\" DevicePath \"\"" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.350356 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26d547e-4d64-4622-8a19-0529f324178c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.652316 4869 generic.go:334] "Generic (PLEG): container finished" podID="d26d547e-4d64-4622-8a19-0529f324178c" containerID="cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a" exitCode=0 Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.652384 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbqtd" 
event={"ID":"d26d547e-4d64-4622-8a19-0529f324178c","Type":"ContainerDied","Data":"cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a"} Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.652433 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbqtd" event={"ID":"d26d547e-4d64-4622-8a19-0529f324178c","Type":"ContainerDied","Data":"926c47892f8c133cae4ab90349dca100c926e24c81655347abee5ad2b9c63bc6"} Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.652441 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbqtd" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.652463 4869 scope.go:117] "RemoveContainer" containerID="cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.694120 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbqtd"] Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.696161 4869 scope.go:117] "RemoveContainer" containerID="5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.702611 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sbqtd"] Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.709671 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:06:37 crc kubenswrapper[4869]: E0314 10:06:37.710014 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:06:37 crc 
kubenswrapper[4869]: I0314 10:06:37.720666 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d26d547e-4d64-4622-8a19-0529f324178c" path="/var/lib/kubelet/pods/d26d547e-4d64-4622-8a19-0529f324178c/volumes" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.726171 4869 scope.go:117] "RemoveContainer" containerID="6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.778642 4869 scope.go:117] "RemoveContainer" containerID="cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a" Mar 14 10:06:37 crc kubenswrapper[4869]: E0314 10:06:37.779038 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a\": container with ID starting with cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a not found: ID does not exist" containerID="cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.779073 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a"} err="failed to get container status \"cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a\": rpc error: code = NotFound desc = could not find container \"cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a\": container with ID starting with cd788386b19cc51bc9ce03543e0d3414129024ed71f3b150de7ec575242a2a4a not found: ID does not exist" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.779093 4869 scope.go:117] "RemoveContainer" containerID="5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867" Mar 14 10:06:37 crc kubenswrapper[4869]: E0314 10:06:37.779493 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867\": container with ID starting with 5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867 not found: ID does not exist" containerID="5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.779550 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867"} err="failed to get container status \"5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867\": rpc error: code = NotFound desc = could not find container \"5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867\": container with ID starting with 5f581a5e061616a9a88ad9481dbb8f5919a651791510e85717b2d298d888e867 not found: ID does not exist" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.779576 4869 scope.go:117] "RemoveContainer" containerID="6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842" Mar 14 10:06:37 crc kubenswrapper[4869]: E0314 10:06:37.780061 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842\": container with ID starting with 6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842 not found: ID does not exist" containerID="6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842" Mar 14 10:06:37 crc kubenswrapper[4869]: I0314 10:06:37.780097 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842"} err="failed to get container status \"6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842\": rpc error: code = NotFound desc = could not find container \"6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842\": 
container with ID starting with 6a00c4151cebeefc00994120d203e7e1663dc308d9557ebff512736528d1d842 not found: ID does not exist" Mar 14 10:06:39 crc kubenswrapper[4869]: I0314 10:06:39.605099 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:06:39 crc kubenswrapper[4869]: I0314 10:06:39.605692 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:06:42 crc kubenswrapper[4869]: I0314 10:06:42.705411 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:06:42 crc kubenswrapper[4869]: E0314 10:06:42.706757 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:06:46 crc kubenswrapper[4869]: I0314 10:06:46.281725 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:46 crc kubenswrapper[4869]: I0314 10:06:46.960939 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:47 crc kubenswrapper[4869]: I0314 10:06:47.103673 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-7d8w9"] Mar 14 10:06:47 crc kubenswrapper[4869]: I0314 10:06:47.800501 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7d8w9" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="registry-server" containerID="cri-o://7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073" gracePeriod=2 Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.277627 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.301697 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-utilities\") pod \"29f4ab20-bd19-4700-a3b9-e33f12001037\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.301765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-catalog-content\") pod \"29f4ab20-bd19-4700-a3b9-e33f12001037\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.302000 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78zlp\" (UniqueName: \"kubernetes.io/projected/29f4ab20-bd19-4700-a3b9-e33f12001037-kube-api-access-78zlp\") pod \"29f4ab20-bd19-4700-a3b9-e33f12001037\" (UID: \"29f4ab20-bd19-4700-a3b9-e33f12001037\") " Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.303179 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-utilities" (OuterVolumeSpecName: "utilities") pod "29f4ab20-bd19-4700-a3b9-e33f12001037" (UID: 
"29f4ab20-bd19-4700-a3b9-e33f12001037"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.308632 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f4ab20-bd19-4700-a3b9-e33f12001037-kube-api-access-78zlp" (OuterVolumeSpecName: "kube-api-access-78zlp") pod "29f4ab20-bd19-4700-a3b9-e33f12001037" (UID: "29f4ab20-bd19-4700-a3b9-e33f12001037"). InnerVolumeSpecName "kube-api-access-78zlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.404042 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.404126 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78zlp\" (UniqueName: \"kubernetes.io/projected/29f4ab20-bd19-4700-a3b9-e33f12001037-kube-api-access-78zlp\") on node \"crc\" DevicePath \"\"" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.440600 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29f4ab20-bd19-4700-a3b9-e33f12001037" (UID: "29f4ab20-bd19-4700-a3b9-e33f12001037"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.505791 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29f4ab20-bd19-4700-a3b9-e33f12001037-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.704411 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:06:48 crc kubenswrapper[4869]: E0314 10:06:48.704825 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.815038 4869 generic.go:334] "Generic (PLEG): container finished" podID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerID="7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073" exitCode=0 Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.815126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d8w9" event={"ID":"29f4ab20-bd19-4700-a3b9-e33f12001037","Type":"ContainerDied","Data":"7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073"} Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.815163 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7d8w9" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.815177 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7d8w9" event={"ID":"29f4ab20-bd19-4700-a3b9-e33f12001037","Type":"ContainerDied","Data":"54df8bab41034e65dfb6a5555eb02d599ec2b3a2c6129a7b66533b8e1aeb3651"} Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.815195 4869 scope.go:117] "RemoveContainer" containerID="7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.848983 4869 scope.go:117] "RemoveContainer" containerID="fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.862083 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7d8w9"] Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.875280 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7d8w9"] Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.889822 4869 scope.go:117] "RemoveContainer" containerID="282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.923263 4869 scope.go:117] "RemoveContainer" containerID="7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073" Mar 14 10:06:48 crc kubenswrapper[4869]: E0314 10:06:48.923763 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073\": container with ID starting with 7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073 not found: ID does not exist" containerID="7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.923847 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073"} err="failed to get container status \"7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073\": rpc error: code = NotFound desc = could not find container \"7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073\": container with ID starting with 7f3a0c7cbe135a3a5a51f5190aa49fe1fa317bd7decfa4e029c679f5c3da6073 not found: ID does not exist" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.923893 4869 scope.go:117] "RemoveContainer" containerID="fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf" Mar 14 10:06:48 crc kubenswrapper[4869]: E0314 10:06:48.924158 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf\": container with ID starting with fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf not found: ID does not exist" containerID="fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.924209 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf"} err="failed to get container status \"fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf\": rpc error: code = NotFound desc = could not find container \"fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf\": container with ID starting with fb1b38aaa703d8a885b66f0a0fa06b1e748af8ba2794c984607e7e6528cc59bf not found: ID does not exist" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.924226 4869 scope.go:117] "RemoveContainer" containerID="282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721" Mar 14 10:06:48 crc kubenswrapper[4869]: E0314 
10:06:48.924724 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721\": container with ID starting with 282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721 not found: ID does not exist" containerID="282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721" Mar 14 10:06:48 crc kubenswrapper[4869]: I0314 10:06:48.924781 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721"} err="failed to get container status \"282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721\": rpc error: code = NotFound desc = could not find container \"282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721\": container with ID starting with 282e539a0945db57ee460c84aa52fb377b2bf468ca78377d82059b046f1b9721 not found: ID does not exist" Mar 14 10:06:49 crc kubenswrapper[4869]: I0314 10:06:49.715102 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" path="/var/lib/kubelet/pods/29f4ab20-bd19-4700-a3b9-e33f12001037/volumes" Mar 14 10:06:56 crc kubenswrapper[4869]: I0314 10:06:56.705470 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:06:56 crc kubenswrapper[4869]: E0314 10:06:56.707017 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:07:02 crc kubenswrapper[4869]: I0314 10:07:02.705078 4869 scope.go:117] "RemoveContainer" 
containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:07:02 crc kubenswrapper[4869]: E0314 10:07:02.706225 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:07:07 crc kubenswrapper[4869]: I0314 10:07:07.998800 4869 scope.go:117] "RemoveContainer" containerID="ec7e79cc1a450d522c50bb5a290017559a119d26044937778117a0fd41383ec0" Mar 14 10:07:09 crc kubenswrapper[4869]: I0314 10:07:09.605621 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:07:09 crc kubenswrapper[4869]: I0314 10:07:09.607125 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:07:10 crc kubenswrapper[4869]: I0314 10:07:10.703954 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:07:10 crc kubenswrapper[4869]: E0314 10:07:10.706026 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" 
podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:07:13 crc kubenswrapper[4869]: I0314 10:07:13.719795 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:07:13 crc kubenswrapper[4869]: E0314 10:07:13.721046 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:07:25 crc kubenswrapper[4869]: I0314 10:07:25.704853 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:07:25 crc kubenswrapper[4869]: E0314 10:07:25.705737 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:07:26 crc kubenswrapper[4869]: I0314 10:07:26.704755 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:07:26 crc kubenswrapper[4869]: E0314 10:07:26.705083 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:07:39 crc kubenswrapper[4869]: I0314 10:07:39.604994 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:07:39 crc kubenswrapper[4869]: I0314 10:07:39.606841 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:07:39 crc kubenswrapper[4869]: I0314 10:07:39.607121 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 10:07:39 crc kubenswrapper[4869]: I0314 10:07:39.608130 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5b092dc662f0b0b3f6f0a1cb7a1e81ef10559c1c3e51ca6b224aa3604bd8a08"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 10:07:39 crc kubenswrapper[4869]: I0314 10:07:39.608295 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://c5b092dc662f0b0b3f6f0a1cb7a1e81ef10559c1c3e51ca6b224aa3604bd8a08" gracePeriod=600 Mar 14 10:07:40 crc kubenswrapper[4869]: I0314 10:07:40.572221 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="c5b092dc662f0b0b3f6f0a1cb7a1e81ef10559c1c3e51ca6b224aa3604bd8a08" exitCode=0 Mar 14 10:07:40 crc kubenswrapper[4869]: I0314 10:07:40.572314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"c5b092dc662f0b0b3f6f0a1cb7a1e81ef10559c1c3e51ca6b224aa3604bd8a08"} Mar 14 10:07:40 crc kubenswrapper[4869]: I0314 10:07:40.572650 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"} Mar 14 10:07:40 crc kubenswrapper[4869]: I0314 10:07:40.572672 4869 scope.go:117] "RemoveContainer" containerID="eec4c91d0f4cd6126d6d66170b2051925af8627c800cfff4a60baec51b0c7738" Mar 14 10:07:40 crc kubenswrapper[4869]: I0314 10:07:40.704496 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:07:40 crc kubenswrapper[4869]: E0314 10:07:40.704724 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:07:41 crc kubenswrapper[4869]: I0314 10:07:41.704344 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:07:41 crc kubenswrapper[4869]: E0314 10:07:41.705055 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:07:53 crc kubenswrapper[4869]: I0314 10:07:53.704849 4869 scope.go:117] 
"RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:07:53 crc kubenswrapper[4869]: E0314 10:07:53.705814 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:07:55 crc kubenswrapper[4869]: I0314 10:07:55.705424 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:07:55 crc kubenswrapper[4869]: E0314 10:07:55.706281 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.157019 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558048-dtwl6"] Mar 14 10:08:00 crc kubenswrapper[4869]: E0314 10:08:00.159191 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="extract-utilities" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.159285 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="extract-utilities" Mar 14 10:08:00 crc kubenswrapper[4869]: E0314 10:08:00.159355 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="registry-server" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.159420 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="registry-server" Mar 14 10:08:00 crc kubenswrapper[4869]: E0314 10:08:00.159532 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="extract-content" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.159613 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="extract-content" Mar 14 10:08:00 crc kubenswrapper[4869]: E0314 10:08:00.159692 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26d547e-4d64-4622-8a19-0529f324178c" containerName="extract-content" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.159759 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26d547e-4d64-4622-8a19-0529f324178c" containerName="extract-content" Mar 14 10:08:00 crc kubenswrapper[4869]: E0314 10:08:00.159845 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26d547e-4d64-4622-8a19-0529f324178c" containerName="extract-utilities" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.159910 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26d547e-4d64-4622-8a19-0529f324178c" containerName="extract-utilities" Mar 14 10:08:00 crc kubenswrapper[4869]: E0314 10:08:00.159994 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26d547e-4d64-4622-8a19-0529f324178c" containerName="registry-server" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.160062 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26d547e-4d64-4622-8a19-0529f324178c" containerName="registry-server" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.160371 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f4ab20-bd19-4700-a3b9-e33f12001037" containerName="registry-server" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.160471 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d26d547e-4d64-4622-8a19-0529f324178c" containerName="registry-server" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.161402 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558048-dtwl6" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.164740 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.165140 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.165376 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.168816 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558048-dtwl6"] Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.316721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr4l8\" (UniqueName: \"kubernetes.io/projected/9643d2fb-06e0-45f7-94e4-12219ba833a7-kube-api-access-nr4l8\") pod \"auto-csr-approver-29558048-dtwl6\" (UID: \"9643d2fb-06e0-45f7-94e4-12219ba833a7\") " pod="openshift-infra/auto-csr-approver-29558048-dtwl6" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.419136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr4l8\" (UniqueName: \"kubernetes.io/projected/9643d2fb-06e0-45f7-94e4-12219ba833a7-kube-api-access-nr4l8\") pod \"auto-csr-approver-29558048-dtwl6\" (UID: \"9643d2fb-06e0-45f7-94e4-12219ba833a7\") " pod="openshift-infra/auto-csr-approver-29558048-dtwl6" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.453138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr4l8\" 
(UniqueName: \"kubernetes.io/projected/9643d2fb-06e0-45f7-94e4-12219ba833a7-kube-api-access-nr4l8\") pod \"auto-csr-approver-29558048-dtwl6\" (UID: \"9643d2fb-06e0-45f7-94e4-12219ba833a7\") " pod="openshift-infra/auto-csr-approver-29558048-dtwl6" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.485133 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558048-dtwl6" Mar 14 10:08:00 crc kubenswrapper[4869]: I0314 10:08:00.926482 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558048-dtwl6"] Mar 14 10:08:01 crc kubenswrapper[4869]: I0314 10:08:01.810590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558048-dtwl6" event={"ID":"9643d2fb-06e0-45f7-94e4-12219ba833a7","Type":"ContainerStarted","Data":"1779a7a2c8f56d20f8905a888998c7134b6fb5d4aab628fe9675fd15969f3edf"} Mar 14 10:08:02 crc kubenswrapper[4869]: I0314 10:08:02.822076 4869 generic.go:334] "Generic (PLEG): container finished" podID="9643d2fb-06e0-45f7-94e4-12219ba833a7" containerID="f64325bae5547e1d0f9db7227f11eefbb5f144662229d7609bec285ed77bd1c6" exitCode=0 Mar 14 10:08:02 crc kubenswrapper[4869]: I0314 10:08:02.822139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558048-dtwl6" event={"ID":"9643d2fb-06e0-45f7-94e4-12219ba833a7","Type":"ContainerDied","Data":"f64325bae5547e1d0f9db7227f11eefbb5f144662229d7609bec285ed77bd1c6"} Mar 14 10:08:04 crc kubenswrapper[4869]: I0314 10:08:04.202026 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558048-dtwl6" Mar 14 10:08:04 crc kubenswrapper[4869]: I0314 10:08:04.296580 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr4l8\" (UniqueName: \"kubernetes.io/projected/9643d2fb-06e0-45f7-94e4-12219ba833a7-kube-api-access-nr4l8\") pod \"9643d2fb-06e0-45f7-94e4-12219ba833a7\" (UID: \"9643d2fb-06e0-45f7-94e4-12219ba833a7\") " Mar 14 10:08:04 crc kubenswrapper[4869]: I0314 10:08:04.302533 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9643d2fb-06e0-45f7-94e4-12219ba833a7-kube-api-access-nr4l8" (OuterVolumeSpecName: "kube-api-access-nr4l8") pod "9643d2fb-06e0-45f7-94e4-12219ba833a7" (UID: "9643d2fb-06e0-45f7-94e4-12219ba833a7"). InnerVolumeSpecName "kube-api-access-nr4l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:08:04 crc kubenswrapper[4869]: I0314 10:08:04.399591 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr4l8\" (UniqueName: \"kubernetes.io/projected/9643d2fb-06e0-45f7-94e4-12219ba833a7-kube-api-access-nr4l8\") on node \"crc\" DevicePath \"\"" Mar 14 10:08:04 crc kubenswrapper[4869]: I0314 10:08:04.845124 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558048-dtwl6" event={"ID":"9643d2fb-06e0-45f7-94e4-12219ba833a7","Type":"ContainerDied","Data":"1779a7a2c8f56d20f8905a888998c7134b6fb5d4aab628fe9675fd15969f3edf"} Mar 14 10:08:04 crc kubenswrapper[4869]: I0314 10:08:04.845168 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1779a7a2c8f56d20f8905a888998c7134b6fb5d4aab628fe9675fd15969f3edf" Mar 14 10:08:04 crc kubenswrapper[4869]: I0314 10:08:04.845662 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558048-dtwl6" Mar 14 10:08:05 crc kubenswrapper[4869]: I0314 10:08:05.287750 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558042-kd54j"] Mar 14 10:08:05 crc kubenswrapper[4869]: I0314 10:08:05.297709 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558042-kd54j"] Mar 14 10:08:05 crc kubenswrapper[4869]: I0314 10:08:05.715983 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5473421f-b4f7-4114-806a-cf3c0237fe1f" path="/var/lib/kubelet/pods/5473421f-b4f7-4114-806a-cf3c0237fe1f/volumes" Mar 14 10:08:07 crc kubenswrapper[4869]: I0314 10:08:07.708927 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:08:08 crc kubenswrapper[4869]: I0314 10:08:08.173210 4869 scope.go:117] "RemoveContainer" containerID="e453a24ea042b8ae141607eac2050585bc5a425e9ae20fa003288148496b2282" Mar 14 10:08:08 crc kubenswrapper[4869]: I0314 10:08:08.704632 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:08:08 crc kubenswrapper[4869]: E0314 10:08:08.705239 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:08:08 crc kubenswrapper[4869]: I0314 10:08:08.882484 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0"} Mar 14 10:08:14 crc kubenswrapper[4869]: I0314 10:08:14.540684 
4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:08:14 crc kubenswrapper[4869]: I0314 10:08:14.541289 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:08:16 crc kubenswrapper[4869]: I0314 10:08:16.963271 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" exitCode=1 Mar 14 10:08:16 crc kubenswrapper[4869]: I0314 10:08:16.963368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0"} Mar 14 10:08:16 crc kubenswrapper[4869]: I0314 10:08:16.963752 4869 scope.go:117] "RemoveContainer" containerID="2ab3835a33b146a407d548e29bf56aec2239a0d3b8ab20da7c0e7b3a7d753752" Mar 14 10:08:16 crc kubenswrapper[4869]: I0314 10:08:16.965384 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:08:16 crc kubenswrapper[4869]: E0314 10:08:16.966002 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:08:22 crc kubenswrapper[4869]: I0314 10:08:22.703685 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:08:22 crc kubenswrapper[4869]: E0314 10:08:22.704722 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:08:24 crc kubenswrapper[4869]: I0314 10:08:24.538698 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:08:24 crc kubenswrapper[4869]: I0314 10:08:24.539082 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:08:24 crc kubenswrapper[4869]: I0314 10:08:24.540938 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:08:24 crc kubenswrapper[4869]: E0314 10:08:24.541452 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:08:36 crc kubenswrapper[4869]: I0314 10:08:36.704179 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:08:36 crc kubenswrapper[4869]: I0314 10:08:36.704814 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:08:36 crc kubenswrapper[4869]: E0314 10:08:36.705017 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:08:36 crc kubenswrapper[4869]: E0314 10:08:36.705288 4869 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:08:50 crc kubenswrapper[4869]: I0314 10:08:50.703872 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:08:50 crc kubenswrapper[4869]: I0314 10:08:50.704539 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:08:50 crc kubenswrapper[4869]: E0314 10:08:50.704782 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:08:51 crc kubenswrapper[4869]: I0314 10:08:51.329501 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e"} Mar 14 10:08:54 crc kubenswrapper[4869]: I0314 10:08:54.405068 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:08:54 crc kubenswrapper[4869]: I0314 10:08:54.405602 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:08:57 crc kubenswrapper[4869]: I0314 10:08:57.957080 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b2x7h"] Mar 14 10:08:57 crc kubenswrapper[4869]: E0314 
10:08:57.957979 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9643d2fb-06e0-45f7-94e4-12219ba833a7" containerName="oc" Mar 14 10:08:57 crc kubenswrapper[4869]: I0314 10:08:57.957992 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9643d2fb-06e0-45f7-94e4-12219ba833a7" containerName="oc" Mar 14 10:08:57 crc kubenswrapper[4869]: I0314 10:08:57.958181 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9643d2fb-06e0-45f7-94e4-12219ba833a7" containerName="oc" Mar 14 10:08:57 crc kubenswrapper[4869]: I0314 10:08:57.959486 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:57 crc kubenswrapper[4869]: I0314 10:08:57.981293 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2x7h"] Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.089128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-catalog-content\") pod \"redhat-marketplace-b2x7h\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.089238 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-utilities\") pod \"redhat-marketplace-b2x7h\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.089319 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzbdl\" (UniqueName: \"kubernetes.io/projected/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-kube-api-access-nzbdl\") pod 
\"redhat-marketplace-b2x7h\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.191035 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-catalog-content\") pod \"redhat-marketplace-b2x7h\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.191084 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-utilities\") pod \"redhat-marketplace-b2x7h\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.191115 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzbdl\" (UniqueName: \"kubernetes.io/projected/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-kube-api-access-nzbdl\") pod \"redhat-marketplace-b2x7h\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.191472 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-catalog-content\") pod \"redhat-marketplace-b2x7h\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.191550 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-utilities\") pod \"redhat-marketplace-b2x7h\" (UID: 
\"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.211493 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzbdl\" (UniqueName: \"kubernetes.io/projected/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-kube-api-access-nzbdl\") pod \"redhat-marketplace-b2x7h\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.277188 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:08:58 crc kubenswrapper[4869]: I0314 10:08:58.772907 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2x7h"] Mar 14 10:08:59 crc kubenswrapper[4869]: I0314 10:08:59.431375 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" exitCode=1 Mar 14 10:08:59 crc kubenswrapper[4869]: I0314 10:08:59.431466 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e"} Mar 14 10:08:59 crc kubenswrapper[4869]: I0314 10:08:59.431805 4869 scope.go:117] "RemoveContainer" containerID="94f96f1749f73f0fb1a6bcf72c12e4216ff8a5effd8cfd2e067c86db1f8d37dd" Mar 14 10:08:59 crc kubenswrapper[4869]: I0314 10:08:59.432423 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:08:59 crc kubenswrapper[4869]: E0314 10:08:59.432807 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:08:59 crc kubenswrapper[4869]: I0314 10:08:59.435279 4869 generic.go:334] "Generic (PLEG): container finished" podID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerID="5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a" exitCode=0 Mar 14 10:08:59 crc kubenswrapper[4869]: I0314 10:08:59.435322 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2x7h" event={"ID":"01e1be64-724f-490f-85cc-6fe7d5dfd7ca","Type":"ContainerDied","Data":"5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a"} Mar 14 10:08:59 crc kubenswrapper[4869]: I0314 10:08:59.435352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2x7h" event={"ID":"01e1be64-724f-490f-85cc-6fe7d5dfd7ca","Type":"ContainerStarted","Data":"bc195bb3ffddb90c83915d1e97d837ac68dc82ab2b545196e50281b767d1d703"} Mar 14 10:09:00 crc kubenswrapper[4869]: I0314 10:09:00.449036 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2x7h" event={"ID":"01e1be64-724f-490f-85cc-6fe7d5dfd7ca","Type":"ContainerStarted","Data":"87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0"} Mar 14 10:09:01 crc kubenswrapper[4869]: I0314 10:09:01.459535 4869 generic.go:334] "Generic (PLEG): container finished" podID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerID="87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0" exitCode=0 Mar 14 10:09:01 crc kubenswrapper[4869]: I0314 10:09:01.459600 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2x7h" 
event={"ID":"01e1be64-724f-490f-85cc-6fe7d5dfd7ca","Type":"ContainerDied","Data":"87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0"} Mar 14 10:09:02 crc kubenswrapper[4869]: I0314 10:09:02.470298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2x7h" event={"ID":"01e1be64-724f-490f-85cc-6fe7d5dfd7ca","Type":"ContainerStarted","Data":"3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640"} Mar 14 10:09:02 crc kubenswrapper[4869]: I0314 10:09:02.491122 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b2x7h" podStartSLOduration=3.097511316 podStartE2EDuration="5.491100372s" podCreationTimestamp="2026-03-14 10:08:57 +0000 UTC" firstStartedPulling="2026-03-14 10:08:59.467778944 +0000 UTC m=+4292.440061047" lastFinishedPulling="2026-03-14 10:09:01.86136804 +0000 UTC m=+4294.833650103" observedRunningTime="2026-03-14 10:09:02.490128988 +0000 UTC m=+4295.462411091" watchObservedRunningTime="2026-03-14 10:09:02.491100372 +0000 UTC m=+4295.463382435" Mar 14 10:09:03 crc kubenswrapper[4869]: I0314 10:09:03.704112 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:09:03 crc kubenswrapper[4869]: E0314 10:09:03.704397 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:09:04 crc kubenswrapper[4869]: I0314 10:09:04.404452 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:09:04 crc kubenswrapper[4869]: I0314 10:09:04.405087 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:09:04 crc kubenswrapper[4869]: I0314 10:09:04.406284 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:09:04 crc kubenswrapper[4869]: E0314 10:09:04.406852 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:09:08 crc kubenswrapper[4869]: I0314 10:09:08.277557 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:09:08 crc kubenswrapper[4869]: I0314 10:09:08.278245 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:09:08 crc kubenswrapper[4869]: I0314 10:09:08.352351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:09:09 crc kubenswrapper[4869]: I0314 10:09:09.157319 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:09:09 crc kubenswrapper[4869]: I0314 10:09:09.212647 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2x7h"] Mar 14 10:09:10 crc kubenswrapper[4869]: I0314 10:09:10.544787 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b2x7h" podUID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerName="registry-server" containerID="cri-o://3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640" gracePeriod=2 Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 
10:09:11.024429 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.084883 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-utilities\") pod \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.085010 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzbdl\" (UniqueName: \"kubernetes.io/projected/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-kube-api-access-nzbdl\") pod \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.085128 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-catalog-content\") pod \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\" (UID: \"01e1be64-724f-490f-85cc-6fe7d5dfd7ca\") " Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.086768 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-utilities" (OuterVolumeSpecName: "utilities") pod "01e1be64-724f-490f-85cc-6fe7d5dfd7ca" (UID: "01e1be64-724f-490f-85cc-6fe7d5dfd7ca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.095826 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-kube-api-access-nzbdl" (OuterVolumeSpecName: "kube-api-access-nzbdl") pod "01e1be64-724f-490f-85cc-6fe7d5dfd7ca" (UID: "01e1be64-724f-490f-85cc-6fe7d5dfd7ca"). InnerVolumeSpecName "kube-api-access-nzbdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.162466 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01e1be64-724f-490f-85cc-6fe7d5dfd7ca" (UID: "01e1be64-724f-490f-85cc-6fe7d5dfd7ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.187733 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.187765 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzbdl\" (UniqueName: \"kubernetes.io/projected/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-kube-api-access-nzbdl\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.187776 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e1be64-724f-490f-85cc-6fe7d5dfd7ca-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.557474 4869 generic.go:334] "Generic (PLEG): container finished" podID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" 
containerID="3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640" exitCode=0 Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.557540 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2x7h" event={"ID":"01e1be64-724f-490f-85cc-6fe7d5dfd7ca","Type":"ContainerDied","Data":"3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640"} Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.557550 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2x7h" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.557578 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2x7h" event={"ID":"01e1be64-724f-490f-85cc-6fe7d5dfd7ca","Type":"ContainerDied","Data":"bc195bb3ffddb90c83915d1e97d837ac68dc82ab2b545196e50281b767d1d703"} Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.557596 4869 scope.go:117] "RemoveContainer" containerID="3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.596326 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2x7h"] Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.603419 4869 scope.go:117] "RemoveContainer" containerID="87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.604155 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2x7h"] Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.630693 4869 scope.go:117] "RemoveContainer" containerID="5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.699222 4869 scope.go:117] "RemoveContainer" containerID="3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640" Mar 14 
10:09:11 crc kubenswrapper[4869]: E0314 10:09:11.699682 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640\": container with ID starting with 3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640 not found: ID does not exist" containerID="3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.699724 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640"} err="failed to get container status \"3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640\": rpc error: code = NotFound desc = could not find container \"3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640\": container with ID starting with 3980439882d942579a50d81e53d0b8b6545bcb64fe4acc1e5ed5b291b5ebc640 not found: ID does not exist" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.699751 4869 scope.go:117] "RemoveContainer" containerID="87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0" Mar 14 10:09:11 crc kubenswrapper[4869]: E0314 10:09:11.700129 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0\": container with ID starting with 87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0 not found: ID does not exist" containerID="87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.700214 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0"} err="failed to get container status 
\"87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0\": rpc error: code = NotFound desc = could not find container \"87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0\": container with ID starting with 87f8689af9d3f5cdde928614e0f4ce61f82841d8362828e60acfa987549d07d0 not found: ID does not exist" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.700292 4869 scope.go:117] "RemoveContainer" containerID="5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a" Mar 14 10:09:11 crc kubenswrapper[4869]: E0314 10:09:11.702624 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a\": container with ID starting with 5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a not found: ID does not exist" containerID="5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.702651 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a"} err="failed to get container status \"5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a\": rpc error: code = NotFound desc = could not find container \"5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a\": container with ID starting with 5d3319b0bae6c8213f8c35ec2c52185d06d711af957159a8aa67c0ae8df3b29a not found: ID does not exist" Mar 14 10:09:11 crc kubenswrapper[4869]: I0314 10:09:11.719370 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" path="/var/lib/kubelet/pods/01e1be64-724f-490f-85cc-6fe7d5dfd7ca/volumes" Mar 14 10:09:15 crc kubenswrapper[4869]: I0314 10:09:15.704617 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 
10:09:15 crc kubenswrapper[4869]: E0314 10:09:15.705616 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:09:16 crc kubenswrapper[4869]: I0314 10:09:16.705394 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:09:16 crc kubenswrapper[4869]: E0314 10:09:16.706873 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:09:26 crc kubenswrapper[4869]: I0314 10:09:26.704434 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:09:26 crc kubenswrapper[4869]: E0314 10:09:26.705756 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:09:27 crc kubenswrapper[4869]: I0314 10:09:27.718492 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:09:27 crc kubenswrapper[4869]: E0314 10:09:27.719396 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:09:39 crc kubenswrapper[4869]: I0314 10:09:39.604766 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:09:39 crc kubenswrapper[4869]: I0314 10:09:39.605335 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:09:39 crc kubenswrapper[4869]: I0314 10:09:39.704582 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:09:39 crc kubenswrapper[4869]: E0314 10:09:39.704855 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:09:40 crc kubenswrapper[4869]: I0314 10:09:40.706020 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:09:40 crc kubenswrapper[4869]: E0314 10:09:40.708253 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.536986 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tzcd6"] Mar 14 10:09:49 crc kubenswrapper[4869]: E0314 10:09:49.538241 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerName="extract-utilities" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.538263 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerName="extract-utilities" Mar 14 10:09:49 crc kubenswrapper[4869]: E0314 10:09:49.538310 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerName="registry-server" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.538323 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerName="registry-server" Mar 14 10:09:49 crc kubenswrapper[4869]: E0314 10:09:49.538381 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerName="extract-content" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.538398 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerName="extract-content" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.538798 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e1be64-724f-490f-85cc-6fe7d5dfd7ca" containerName="registry-server" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.541273 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.548061 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzcd6"] Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.663326 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-catalog-content\") pod \"community-operators-tzcd6\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.663426 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z6dz\" (UniqueName: \"kubernetes.io/projected/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-kube-api-access-9z6dz\") pod \"community-operators-tzcd6\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.663647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-utilities\") pod \"community-operators-tzcd6\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.766578 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-catalog-content\") pod \"community-operators-tzcd6\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.766669 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9z6dz\" (UniqueName: \"kubernetes.io/projected/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-kube-api-access-9z6dz\") pod \"community-operators-tzcd6\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.766725 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-utilities\") pod \"community-operators-tzcd6\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.767375 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-utilities\") pod \"community-operators-tzcd6\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.767436 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-catalog-content\") pod \"community-operators-tzcd6\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:49 crc kubenswrapper[4869]: I0314 10:09:49.896095 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z6dz\" (UniqueName: \"kubernetes.io/projected/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-kube-api-access-9z6dz\") pod \"community-operators-tzcd6\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:50 crc kubenswrapper[4869]: I0314 10:09:50.182312 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:09:50 crc kubenswrapper[4869]: I0314 10:09:50.639186 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzcd6"] Mar 14 10:09:50 crc kubenswrapper[4869]: W0314 10:09:50.652732 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5b6e765_706b_4c1c_88d4_5c8a83f027a0.slice/crio-bd193f04d5a541befded25c33173fcb8b05916a690a928ad16817f7687daedee WatchSource:0}: Error finding container bd193f04d5a541befded25c33173fcb8b05916a690a928ad16817f7687daedee: Status 404 returned error can't find the container with id bd193f04d5a541befded25c33173fcb8b05916a690a928ad16817f7687daedee Mar 14 10:09:50 crc kubenswrapper[4869]: I0314 10:09:50.704144 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:09:50 crc kubenswrapper[4869]: E0314 10:09:50.704646 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:09:51 crc kubenswrapper[4869]: I0314 10:09:51.044068 4869 generic.go:334] "Generic (PLEG): container finished" podID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerID="739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa" exitCode=0 Mar 14 10:09:51 crc kubenswrapper[4869]: I0314 10:09:51.044132 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzcd6" event={"ID":"f5b6e765-706b-4c1c-88d4-5c8a83f027a0","Type":"ContainerDied","Data":"739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa"} Mar 14 10:09:51 crc kubenswrapper[4869]: 
I0314 10:09:51.044174 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzcd6" event={"ID":"f5b6e765-706b-4c1c-88d4-5c8a83f027a0","Type":"ContainerStarted","Data":"bd193f04d5a541befded25c33173fcb8b05916a690a928ad16817f7687daedee"} Mar 14 10:09:52 crc kubenswrapper[4869]: I0314 10:09:52.056154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzcd6" event={"ID":"f5b6e765-706b-4c1c-88d4-5c8a83f027a0","Type":"ContainerStarted","Data":"3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639"} Mar 14 10:09:53 crc kubenswrapper[4869]: I0314 10:09:53.066280 4869 generic.go:334] "Generic (PLEG): container finished" podID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerID="3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639" exitCode=0 Mar 14 10:09:53 crc kubenswrapper[4869]: I0314 10:09:53.066355 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzcd6" event={"ID":"f5b6e765-706b-4c1c-88d4-5c8a83f027a0","Type":"ContainerDied","Data":"3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639"} Mar 14 10:09:54 crc kubenswrapper[4869]: I0314 10:09:54.079251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzcd6" event={"ID":"f5b6e765-706b-4c1c-88d4-5c8a83f027a0","Type":"ContainerStarted","Data":"ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb"} Mar 14 10:09:54 crc kubenswrapper[4869]: I0314 10:09:54.101503 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tzcd6" podStartSLOduration=2.595737627 podStartE2EDuration="5.101476732s" podCreationTimestamp="2026-03-14 10:09:49 +0000 UTC" firstStartedPulling="2026-03-14 10:09:51.049571253 +0000 UTC m=+4344.021853316" lastFinishedPulling="2026-03-14 10:09:53.555310328 +0000 UTC m=+4346.527592421" 
observedRunningTime="2026-03-14 10:09:54.097179147 +0000 UTC m=+4347.069461250" watchObservedRunningTime="2026-03-14 10:09:54.101476732 +0000 UTC m=+4347.073758795" Mar 14 10:09:55 crc kubenswrapper[4869]: I0314 10:09:55.704246 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:09:55 crc kubenswrapper[4869]: E0314 10:09:55.704817 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.182941 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.183718 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558050-k8c98"] Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.187666 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.187828 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558050-k8c98" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.192796 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.193405 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.194051 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.203920 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558050-k8c98"] Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.205741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcg4\" (UniqueName: \"kubernetes.io/projected/834e2d14-3c26-4530-8024-ca04f292390c-kube-api-access-xjcg4\") pod \"auto-csr-approver-29558050-k8c98\" (UID: \"834e2d14-3c26-4530-8024-ca04f292390c\") " pod="openshift-infra/auto-csr-approver-29558050-k8c98" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.253699 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.307574 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjcg4\" (UniqueName: \"kubernetes.io/projected/834e2d14-3c26-4530-8024-ca04f292390c-kube-api-access-xjcg4\") pod \"auto-csr-approver-29558050-k8c98\" (UID: \"834e2d14-3c26-4530-8024-ca04f292390c\") " pod="openshift-infra/auto-csr-approver-29558050-k8c98" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.327941 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjcg4\" 
(UniqueName: \"kubernetes.io/projected/834e2d14-3c26-4530-8024-ca04f292390c-kube-api-access-xjcg4\") pod \"auto-csr-approver-29558050-k8c98\" (UID: \"834e2d14-3c26-4530-8024-ca04f292390c\") " pod="openshift-infra/auto-csr-approver-29558050-k8c98" Mar 14 10:10:00 crc kubenswrapper[4869]: I0314 10:10:00.517429 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558050-k8c98" Mar 14 10:10:01 crc kubenswrapper[4869]: I0314 10:10:01.020061 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558050-k8c98"] Mar 14 10:10:01 crc kubenswrapper[4869]: I0314 10:10:01.270668 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:10:01 crc kubenswrapper[4869]: I0314 10:10:01.323635 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tzcd6"] Mar 14 10:10:02 crc kubenswrapper[4869]: I0314 10:10:02.166454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558050-k8c98" event={"ID":"834e2d14-3c26-4530-8024-ca04f292390c","Type":"ContainerStarted","Data":"7888ab4cb85fdd254e9b7d8010df040e344a7300716aa48ba040ef7c71d54684"} Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.184210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558050-k8c98" event={"ID":"834e2d14-3c26-4530-8024-ca04f292390c","Type":"ContainerStarted","Data":"b22d8162ad7d4a0f62b51b0a4e4bf8979339b61214d3e01765d4ae4cd7deaed7"} Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.184347 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tzcd6" podUID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerName="registry-server" containerID="cri-o://ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb" gracePeriod=2 
Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.210994 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558050-k8c98" podStartSLOduration=2.00730359 podStartE2EDuration="3.210967625s" podCreationTimestamp="2026-03-14 10:10:00 +0000 UTC" firstStartedPulling="2026-03-14 10:10:01.206939806 +0000 UTC m=+4354.179221869" lastFinishedPulling="2026-03-14 10:10:02.410603841 +0000 UTC m=+4355.382885904" observedRunningTime="2026-03-14 10:10:03.204827335 +0000 UTC m=+4356.177109458" watchObservedRunningTime="2026-03-14 10:10:03.210967625 +0000 UTC m=+4356.183249718" Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.869067 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.990751 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z6dz\" (UniqueName: \"kubernetes.io/projected/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-kube-api-access-9z6dz\") pod \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.991642 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-catalog-content\") pod \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.991868 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-utilities\") pod \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\" (UID: \"f5b6e765-706b-4c1c-88d4-5c8a83f027a0\") " Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.992789 4869 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-utilities" (OuterVolumeSpecName: "utilities") pod "f5b6e765-706b-4c1c-88d4-5c8a83f027a0" (UID: "f5b6e765-706b-4c1c-88d4-5c8a83f027a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.993246 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:10:03 crc kubenswrapper[4869]: I0314 10:10:03.996626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-kube-api-access-9z6dz" (OuterVolumeSpecName: "kube-api-access-9z6dz") pod "f5b6e765-706b-4c1c-88d4-5c8a83f027a0" (UID: "f5b6e765-706b-4c1c-88d4-5c8a83f027a0"). InnerVolumeSpecName "kube-api-access-9z6dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.039827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5b6e765-706b-4c1c-88d4-5c8a83f027a0" (UID: "f5b6e765-706b-4c1c-88d4-5c8a83f027a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.095767 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.095807 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z6dz\" (UniqueName: \"kubernetes.io/projected/f5b6e765-706b-4c1c-88d4-5c8a83f027a0-kube-api-access-9z6dz\") on node \"crc\" DevicePath \"\"" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.202074 4869 generic.go:334] "Generic (PLEG): container finished" podID="834e2d14-3c26-4530-8024-ca04f292390c" containerID="b22d8162ad7d4a0f62b51b0a4e4bf8979339b61214d3e01765d4ae4cd7deaed7" exitCode=0 Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.202136 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558050-k8c98" event={"ID":"834e2d14-3c26-4530-8024-ca04f292390c","Type":"ContainerDied","Data":"b22d8162ad7d4a0f62b51b0a4e4bf8979339b61214d3e01765d4ae4cd7deaed7"} Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.206984 4869 generic.go:334] "Generic (PLEG): container finished" podID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerID="ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb" exitCode=0 Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.207157 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzcd6" event={"ID":"f5b6e765-706b-4c1c-88d4-5c8a83f027a0","Type":"ContainerDied","Data":"ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb"} Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.207314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzcd6" 
event={"ID":"f5b6e765-706b-4c1c-88d4-5c8a83f027a0","Type":"ContainerDied","Data":"bd193f04d5a541befded25c33173fcb8b05916a690a928ad16817f7687daedee"} Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.207369 4869 scope.go:117] "RemoveContainer" containerID="ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.207876 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzcd6" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.258037 4869 scope.go:117] "RemoveContainer" containerID="3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.275370 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tzcd6"] Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.284765 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tzcd6"] Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.304733 4869 scope.go:117] "RemoveContainer" containerID="739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.366272 4869 scope.go:117] "RemoveContainer" containerID="ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb" Mar 14 10:10:04 crc kubenswrapper[4869]: E0314 10:10:04.366690 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb\": container with ID starting with ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb not found: ID does not exist" containerID="ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.366765 4869 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb"} err="failed to get container status \"ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb\": rpc error: code = NotFound desc = could not find container \"ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb\": container with ID starting with ab2f3ea72558a746c074b9cb6e3f838ecb4a86b3ebb510dd0b290195a8d391eb not found: ID does not exist" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.366813 4869 scope.go:117] "RemoveContainer" containerID="3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639" Mar 14 10:10:04 crc kubenswrapper[4869]: E0314 10:10:04.367327 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639\": container with ID starting with 3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639 not found: ID does not exist" containerID="3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.367353 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639"} err="failed to get container status \"3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639\": rpc error: code = NotFound desc = could not find container \"3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639\": container with ID starting with 3733ade3c22112789034d7a9fec7b42b9f17ca770ad3ef936778e843c0e5a639 not found: ID does not exist" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.367376 4869 scope.go:117] "RemoveContainer" containerID="739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa" Mar 14 10:10:04 crc kubenswrapper[4869]: E0314 10:10:04.367655 4869 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa\": container with ID starting with 739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa not found: ID does not exist" containerID="739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa" Mar 14 10:10:04 crc kubenswrapper[4869]: I0314 10:10:04.367698 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa"} err="failed to get container status \"739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa\": rpc error: code = NotFound desc = could not find container \"739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa\": container with ID starting with 739fc2e7fa72a9a024be8646b62f4ddd2c2e50c0b6d5ca203f67e0c80ed318fa not found: ID does not exist" Mar 14 10:10:05 crc kubenswrapper[4869]: I0314 10:10:05.638680 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558050-k8c98" Mar 14 10:10:05 crc kubenswrapper[4869]: I0314 10:10:05.706138 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:10:05 crc kubenswrapper[4869]: E0314 10:10:05.706420 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:10:05 crc kubenswrapper[4869]: I0314 10:10:05.721748 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" path="/var/lib/kubelet/pods/f5b6e765-706b-4c1c-88d4-5c8a83f027a0/volumes" Mar 14 10:10:05 crc kubenswrapper[4869]: I0314 10:10:05.728489 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjcg4\" (UniqueName: \"kubernetes.io/projected/834e2d14-3c26-4530-8024-ca04f292390c-kube-api-access-xjcg4\") pod \"834e2d14-3c26-4530-8024-ca04f292390c\" (UID: \"834e2d14-3c26-4530-8024-ca04f292390c\") " Mar 14 10:10:05 crc kubenswrapper[4869]: I0314 10:10:05.755217 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834e2d14-3c26-4530-8024-ca04f292390c-kube-api-access-xjcg4" (OuterVolumeSpecName: "kube-api-access-xjcg4") pod "834e2d14-3c26-4530-8024-ca04f292390c" (UID: "834e2d14-3c26-4530-8024-ca04f292390c"). InnerVolumeSpecName "kube-api-access-xjcg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:10:05 crc kubenswrapper[4869]: I0314 10:10:05.831322 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjcg4\" (UniqueName: \"kubernetes.io/projected/834e2d14-3c26-4530-8024-ca04f292390c-kube-api-access-xjcg4\") on node \"crc\" DevicePath \"\"" Mar 14 10:10:06 crc kubenswrapper[4869]: I0314 10:10:06.234122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558050-k8c98" event={"ID":"834e2d14-3c26-4530-8024-ca04f292390c","Type":"ContainerDied","Data":"7888ab4cb85fdd254e9b7d8010df040e344a7300716aa48ba040ef7c71d54684"} Mar 14 10:10:06 crc kubenswrapper[4869]: I0314 10:10:06.234474 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7888ab4cb85fdd254e9b7d8010df040e344a7300716aa48ba040ef7c71d54684" Mar 14 10:10:06 crc kubenswrapper[4869]: I0314 10:10:06.234439 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558050-k8c98" Mar 14 10:10:06 crc kubenswrapper[4869]: I0314 10:10:06.308904 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558044-jzltw"] Mar 14 10:10:06 crc kubenswrapper[4869]: I0314 10:10:06.317394 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558044-jzltw"] Mar 14 10:10:07 crc kubenswrapper[4869]: I0314 10:10:07.704151 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:10:07 crc kubenswrapper[4869]: E0314 10:10:07.704594 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" 
podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:10:07 crc kubenswrapper[4869]: I0314 10:10:07.714922 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab" path="/var/lib/kubelet/pods/bb3ca2c8-8e60-439a-93aa-6c9c3d9678ab/volumes" Mar 14 10:10:08 crc kubenswrapper[4869]: I0314 10:10:08.301117 4869 scope.go:117] "RemoveContainer" containerID="5768e85ea9639607f1ce9f04b0099bf155d09ce3455cff7166ce38af434df858" Mar 14 10:10:09 crc kubenswrapper[4869]: I0314 10:10:09.605476 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:10:09 crc kubenswrapper[4869]: I0314 10:10:09.605910 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:10:19 crc kubenswrapper[4869]: I0314 10:10:19.704645 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:10:19 crc kubenswrapper[4869]: E0314 10:10:19.705394 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:10:21 crc kubenswrapper[4869]: I0314 10:10:21.704635 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 
10:10:21 crc kubenswrapper[4869]: E0314 10:10:21.705468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:10:30 crc kubenswrapper[4869]: I0314 10:10:30.704652 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:10:30 crc kubenswrapper[4869]: E0314 10:10:30.705722 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:10:33 crc kubenswrapper[4869]: I0314 10:10:33.704333 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:10:33 crc kubenswrapper[4869]: E0314 10:10:33.705120 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:10:39 crc kubenswrapper[4869]: I0314 10:10:39.605830 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:10:39 crc kubenswrapper[4869]: I0314 10:10:39.606789 
4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:10:39 crc kubenswrapper[4869]: I0314 10:10:39.606870 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 10:10:39 crc kubenswrapper[4869]: I0314 10:10:39.608253 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 10:10:39 crc kubenswrapper[4869]: I0314 10:10:39.608390 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" gracePeriod=600 Mar 14 10:10:39 crc kubenswrapper[4869]: E0314 10:10:39.750769 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:10:40 crc kubenswrapper[4869]: I0314 10:10:40.623501 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" exitCode=0 Mar 14 10:10:40 crc kubenswrapper[4869]: I0314 10:10:40.624064 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"} Mar 14 10:10:40 crc kubenswrapper[4869]: I0314 10:10:40.624194 4869 scope.go:117] "RemoveContainer" containerID="c5b092dc662f0b0b3f6f0a1cb7a1e81ef10559c1c3e51ca6b224aa3604bd8a08" Mar 14 10:10:40 crc kubenswrapper[4869]: I0314 10:10:40.626025 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:10:40 crc kubenswrapper[4869]: E0314 10:10:40.627492 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:10:42 crc kubenswrapper[4869]: I0314 10:10:42.705061 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:10:42 crc kubenswrapper[4869]: E0314 10:10:42.705860 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:10:46 crc kubenswrapper[4869]: I0314 10:10:46.704273 4869 scope.go:117] 
"RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:10:46 crc kubenswrapper[4869]: E0314 10:10:46.705281 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:10:53 crc kubenswrapper[4869]: I0314 10:10:53.703703 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:10:53 crc kubenswrapper[4869]: I0314 10:10:53.704682 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:10:53 crc kubenswrapper[4869]: E0314 10:10:53.704839 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:10:53 crc kubenswrapper[4869]: E0314 10:10:53.705036 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:10:58 crc kubenswrapper[4869]: I0314 10:10:58.703942 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:10:58 crc kubenswrapper[4869]: E0314 
10:10:58.705260 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:11:05 crc kubenswrapper[4869]: I0314 10:11:05.704497 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:11:05 crc kubenswrapper[4869]: I0314 10:11:05.705431 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:11:05 crc kubenswrapper[4869]: E0314 10:11:05.705563 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:11:05 crc kubenswrapper[4869]: E0314 10:11:05.706060 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:11:11 crc kubenswrapper[4869]: I0314 10:11:11.704323 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:11:11 crc kubenswrapper[4869]: E0314 10:11:11.705296 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:11:17 crc kubenswrapper[4869]: I0314 10:11:17.719495 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:11:17 crc kubenswrapper[4869]: E0314 10:11:17.720969 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:11:18 crc kubenswrapper[4869]: I0314 10:11:18.704578 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:11:18 crc kubenswrapper[4869]: E0314 10:11:18.705032 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:11:26 crc kubenswrapper[4869]: I0314 10:11:26.704293 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:11:26 crc kubenswrapper[4869]: E0314 10:11:26.705235 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" 
pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:11:28 crc kubenswrapper[4869]: I0314 10:11:28.704121 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:11:28 crc kubenswrapper[4869]: E0314 10:11:28.704917 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:11:33 crc kubenswrapper[4869]: I0314 10:11:33.703701 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:11:33 crc kubenswrapper[4869]: E0314 10:11:33.704611 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:11:37 crc kubenswrapper[4869]: I0314 10:11:37.712879 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:11:37 crc kubenswrapper[4869]: E0314 10:11:37.713382 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:11:42 crc kubenswrapper[4869]: I0314 10:11:42.704388 
4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:11:42 crc kubenswrapper[4869]: E0314 10:11:42.705327 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:11:47 crc kubenswrapper[4869]: I0314 10:11:47.716764 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:11:47 crc kubenswrapper[4869]: E0314 10:11:47.717748 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:11:48 crc kubenswrapper[4869]: I0314 10:11:48.704449 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:11:48 crc kubenswrapper[4869]: E0314 10:11:48.705106 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:11:50 crc kubenswrapper[4869]: I0314 10:11:50.995240 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="2b16088c-48ba-4c09-91b1-a0447bced81b" 
containerName="galera" probeResult="failure" output="command timed out" Mar 14 10:11:50 crc kubenswrapper[4869]: I0314 10:11:50.997777 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="2b16088c-48ba-4c09-91b1-a0447bced81b" containerName="galera" probeResult="failure" output="command timed out" Mar 14 10:11:53 crc kubenswrapper[4869]: I0314 10:11:53.703978 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:11:53 crc kubenswrapper[4869]: E0314 10:11:53.704988 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:11:59 crc kubenswrapper[4869]: I0314 10:11:59.705729 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:11:59 crc kubenswrapper[4869]: E0314 10:11:59.706709 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.160063 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558052-66gpb"] Mar 14 10:12:00 crc kubenswrapper[4869]: E0314 10:12:00.160729 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerName="extract-content" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.160810 4869 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerName="extract-content" Mar 14 10:12:00 crc kubenswrapper[4869]: E0314 10:12:00.160824 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerName="extract-utilities" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.160833 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerName="extract-utilities" Mar 14 10:12:00 crc kubenswrapper[4869]: E0314 10:12:00.160872 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834e2d14-3c26-4530-8024-ca04f292390c" containerName="oc" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.160884 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="834e2d14-3c26-4530-8024-ca04f292390c" containerName="oc" Mar 14 10:12:00 crc kubenswrapper[4869]: E0314 10:12:00.160903 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerName="registry-server" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.160910 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerName="registry-server" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.161166 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="834e2d14-3c26-4530-8024-ca04f292390c" containerName="oc" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.161190 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b6e765-706b-4c1c-88d4-5c8a83f027a0" containerName="registry-server" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.163386 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558052-66gpb" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.166275 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.166515 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.166969 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.177394 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558052-66gpb"] Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.244986 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5t2k\" (UniqueName: \"kubernetes.io/projected/ccaafb2e-9b8b-4109-8d5b-cc37094a36f5-kube-api-access-r5t2k\") pod \"auto-csr-approver-29558052-66gpb\" (UID: \"ccaafb2e-9b8b-4109-8d5b-cc37094a36f5\") " pod="openshift-infra/auto-csr-approver-29558052-66gpb" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.347369 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5t2k\" (UniqueName: \"kubernetes.io/projected/ccaafb2e-9b8b-4109-8d5b-cc37094a36f5-kube-api-access-r5t2k\") pod \"auto-csr-approver-29558052-66gpb\" (UID: \"ccaafb2e-9b8b-4109-8d5b-cc37094a36f5\") " pod="openshift-infra/auto-csr-approver-29558052-66gpb" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.365758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5t2k\" (UniqueName: \"kubernetes.io/projected/ccaafb2e-9b8b-4109-8d5b-cc37094a36f5-kube-api-access-r5t2k\") pod \"auto-csr-approver-29558052-66gpb\" (UID: \"ccaafb2e-9b8b-4109-8d5b-cc37094a36f5\") " 
pod="openshift-infra/auto-csr-approver-29558052-66gpb" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.505310 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558052-66gpb" Mar 14 10:12:00 crc kubenswrapper[4869]: I0314 10:12:00.989381 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558052-66gpb"] Mar 14 10:12:01 crc kubenswrapper[4869]: I0314 10:12:01.008549 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 10:12:01 crc kubenswrapper[4869]: I0314 10:12:01.704936 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:12:01 crc kubenswrapper[4869]: E0314 10:12:01.705823 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:12:01 crc kubenswrapper[4869]: I0314 10:12:01.919550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558052-66gpb" event={"ID":"ccaafb2e-9b8b-4109-8d5b-cc37094a36f5","Type":"ContainerStarted","Data":"85858a5f134cf653794f3d0b6cbda050f7157816e98cfec162a834b407095221"} Mar 14 10:12:02 crc kubenswrapper[4869]: I0314 10:12:02.932258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558052-66gpb" event={"ID":"ccaafb2e-9b8b-4109-8d5b-cc37094a36f5","Type":"ContainerStarted","Data":"29acf5fb14c6fd88a2d8d211c3e748819ab58c6ee95c4dca7e55119b5abeeaef"} Mar 14 10:12:03 crc kubenswrapper[4869]: I0314 10:12:03.940871 4869 generic.go:334] 
"Generic (PLEG): container finished" podID="ccaafb2e-9b8b-4109-8d5b-cc37094a36f5" containerID="29acf5fb14c6fd88a2d8d211c3e748819ab58c6ee95c4dca7e55119b5abeeaef" exitCode=0 Mar 14 10:12:03 crc kubenswrapper[4869]: I0314 10:12:03.940912 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558052-66gpb" event={"ID":"ccaafb2e-9b8b-4109-8d5b-cc37094a36f5","Type":"ContainerDied","Data":"29acf5fb14c6fd88a2d8d211c3e748819ab58c6ee95c4dca7e55119b5abeeaef"} Mar 14 10:12:05 crc kubenswrapper[4869]: I0314 10:12:05.309713 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558052-66gpb" Mar 14 10:12:05 crc kubenswrapper[4869]: I0314 10:12:05.348259 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5t2k\" (UniqueName: \"kubernetes.io/projected/ccaafb2e-9b8b-4109-8d5b-cc37094a36f5-kube-api-access-r5t2k\") pod \"ccaafb2e-9b8b-4109-8d5b-cc37094a36f5\" (UID: \"ccaafb2e-9b8b-4109-8d5b-cc37094a36f5\") " Mar 14 10:12:05 crc kubenswrapper[4869]: I0314 10:12:05.355239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccaafb2e-9b8b-4109-8d5b-cc37094a36f5-kube-api-access-r5t2k" (OuterVolumeSpecName: "kube-api-access-r5t2k") pod "ccaafb2e-9b8b-4109-8d5b-cc37094a36f5" (UID: "ccaafb2e-9b8b-4109-8d5b-cc37094a36f5"). InnerVolumeSpecName "kube-api-access-r5t2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:12:05 crc kubenswrapper[4869]: I0314 10:12:05.450798 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5t2k\" (UniqueName: \"kubernetes.io/projected/ccaafb2e-9b8b-4109-8d5b-cc37094a36f5-kube-api-access-r5t2k\") on node \"crc\" DevicePath \"\"" Mar 14 10:12:05 crc kubenswrapper[4869]: I0314 10:12:05.704391 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:12:05 crc kubenswrapper[4869]: E0314 10:12:05.704834 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:12:05 crc kubenswrapper[4869]: I0314 10:12:05.960712 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558052-66gpb" event={"ID":"ccaafb2e-9b8b-4109-8d5b-cc37094a36f5","Type":"ContainerDied","Data":"85858a5f134cf653794f3d0b6cbda050f7157816e98cfec162a834b407095221"} Mar 14 10:12:05 crc kubenswrapper[4869]: I0314 10:12:05.960754 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558052-66gpb" Mar 14 10:12:05 crc kubenswrapper[4869]: I0314 10:12:05.960766 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85858a5f134cf653794f3d0b6cbda050f7157816e98cfec162a834b407095221" Mar 14 10:12:06 crc kubenswrapper[4869]: I0314 10:12:06.032866 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558046-ff8np"] Mar 14 10:12:06 crc kubenswrapper[4869]: I0314 10:12:06.041281 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558046-ff8np"] Mar 14 10:12:07 crc kubenswrapper[4869]: I0314 10:12:07.724509 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49e262e-0287-4565-9886-6bdec491d7a9" path="/var/lib/kubelet/pods/b49e262e-0287-4565-9886-6bdec491d7a9/volumes" Mar 14 10:12:08 crc kubenswrapper[4869]: I0314 10:12:08.462613 4869 scope.go:117] "RemoveContainer" containerID="6172f4986f073bbe959bc78263dfcdc14ac2487be8c6e71ef7e0a830916d6869" Mar 14 10:12:11 crc kubenswrapper[4869]: I0314 10:12:11.704545 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:12:11 crc kubenswrapper[4869]: E0314 10:12:11.705203 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:12:16 crc kubenswrapper[4869]: I0314 10:12:16.703615 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:12:16 crc kubenswrapper[4869]: E0314 10:12:16.704596 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:12:17 crc kubenswrapper[4869]: I0314 10:12:17.714415 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:12:17 crc kubenswrapper[4869]: E0314 10:12:17.715394 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:12:22 crc kubenswrapper[4869]: I0314 10:12:22.704207 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:12:22 crc kubenswrapper[4869]: E0314 10:12:22.705075 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:12:29 crc kubenswrapper[4869]: I0314 10:12:29.703935 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:12:29 crc kubenswrapper[4869]: E0314 10:12:29.704776 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:12:30 crc kubenswrapper[4869]: I0314 10:12:30.704318 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:12:30 crc kubenswrapper[4869]: E0314 10:12:30.705119 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:12:36 crc kubenswrapper[4869]: I0314 10:12:36.706178 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:12:36 crc kubenswrapper[4869]: E0314 10:12:36.707462 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:12:42 crc kubenswrapper[4869]: I0314 10:12:42.704202 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:12:42 crc kubenswrapper[4869]: E0314 10:12:42.705101 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:12:45 crc kubenswrapper[4869]: I0314 10:12:45.704841 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:12:45 crc kubenswrapper[4869]: E0314 10:12:45.705325 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:12:51 crc kubenswrapper[4869]: I0314 10:12:51.704412 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:12:51 crc kubenswrapper[4869]: E0314 10:12:51.705605 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:12:55 crc kubenswrapper[4869]: I0314 10:12:55.704593 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:12:55 crc kubenswrapper[4869]: E0314 10:12:55.705628 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:12:59 crc kubenswrapper[4869]: I0314 10:12:59.703916 4869 scope.go:117] "RemoveContainer" 
containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:12:59 crc kubenswrapper[4869]: E0314 10:12:59.704782 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:13:02 crc kubenswrapper[4869]: I0314 10:13:02.705201 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:13:02 crc kubenswrapper[4869]: E0314 10:13:02.706053 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:13:07 crc kubenswrapper[4869]: I0314 10:13:07.711731 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:13:07 crc kubenswrapper[4869]: E0314 10:13:07.712638 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:13:12 crc kubenswrapper[4869]: I0314 10:13:12.705162 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:13:12 crc kubenswrapper[4869]: E0314 10:13:12.706018 4869 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:13:15 crc kubenswrapper[4869]: I0314 10:13:15.708488 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:13:15 crc kubenswrapper[4869]: E0314 10:13:15.711245 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:13:20 crc kubenswrapper[4869]: I0314 10:13:20.705403 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:13:20 crc kubenswrapper[4869]: E0314 10:13:20.706698 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:13:24 crc kubenswrapper[4869]: I0314 10:13:24.704577 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:13:24 crc kubenswrapper[4869]: E0314 10:13:24.705641 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:13:27 crc kubenswrapper[4869]: I0314 10:13:27.715906 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0" Mar 14 10:13:28 crc kubenswrapper[4869]: I0314 10:13:28.855077 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"} Mar 14 10:13:34 crc kubenswrapper[4869]: I0314 10:13:34.539615 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:13:34 crc kubenswrapper[4869]: I0314 10:13:34.540632 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:13:35 crc kubenswrapper[4869]: I0314 10:13:35.704868 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e" Mar 14 10:13:35 crc kubenswrapper[4869]: E0314 10:13:35.705304 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:13:36 crc kubenswrapper[4869]: I0314 10:13:36.928906 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" exitCode=1 Mar 14 10:13:36 crc 
kubenswrapper[4869]: I0314 10:13:36.929229 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"}
Mar 14 10:13:36 crc kubenswrapper[4869]: I0314 10:13:36.929266 4869 scope.go:117] "RemoveContainer" containerID="402d6e34e1972a1fa2a417169a4e41dcaa06c274e57e7b3fdc7c22177fa0dfe0"
Mar 14 10:13:36 crc kubenswrapper[4869]: I0314 10:13:36.930115 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:13:36 crc kubenswrapper[4869]: E0314 10:13:36.930472 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:13:39 crc kubenswrapper[4869]: I0314 10:13:39.705956 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:13:39 crc kubenswrapper[4869]: E0314 10:13:39.706726 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:13:44 crc kubenswrapper[4869]: I0314 10:13:44.538929 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7"
Mar 14 10:13:44 crc kubenswrapper[4869]: I0314 10:13:44.539460 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7"
Mar 14 10:13:44 crc kubenswrapper[4869]: I0314 10:13:44.540327 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:13:44 crc kubenswrapper[4869]: E0314 10:13:44.540652 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:13:49 crc kubenswrapper[4869]: I0314 10:13:49.707236 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e"
Mar 14 10:13:49 crc kubenswrapper[4869]: E0314 10:13:49.708678 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:13:50 crc kubenswrapper[4869]: I0314 10:13:50.703772 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:13:50 crc kubenswrapper[4869]: E0314 10:13:50.704570 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:13:55 crc kubenswrapper[4869]: I0314 10:13:55.705046 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:13:55 crc kubenswrapper[4869]: E0314 10:13:55.705982 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.142055 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558054-sbbk8"]
Mar 14 10:14:00 crc kubenswrapper[4869]: E0314 10:14:00.143016 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccaafb2e-9b8b-4109-8d5b-cc37094a36f5" containerName="oc"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.143031 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccaafb2e-9b8b-4109-8d5b-cc37094a36f5" containerName="oc"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.143266 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccaafb2e-9b8b-4109-8d5b-cc37094a36f5" containerName="oc"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.144032 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558054-sbbk8"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.146605 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.146967 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.146966 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.151613 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558054-sbbk8"]
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.290121 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvps8\" (UniqueName: \"kubernetes.io/projected/eb1c3c30-9ad4-43c9-bafa-e559384d56c9-kube-api-access-mvps8\") pod \"auto-csr-approver-29558054-sbbk8\" (UID: \"eb1c3c30-9ad4-43c9-bafa-e559384d56c9\") " pod="openshift-infra/auto-csr-approver-29558054-sbbk8"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.392150 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvps8\" (UniqueName: \"kubernetes.io/projected/eb1c3c30-9ad4-43c9-bafa-e559384d56c9-kube-api-access-mvps8\") pod \"auto-csr-approver-29558054-sbbk8\" (UID: \"eb1c3c30-9ad4-43c9-bafa-e559384d56c9\") " pod="openshift-infra/auto-csr-approver-29558054-sbbk8"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.595812 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvps8\" (UniqueName: \"kubernetes.io/projected/eb1c3c30-9ad4-43c9-bafa-e559384d56c9-kube-api-access-mvps8\") pod \"auto-csr-approver-29558054-sbbk8\" (UID: \"eb1c3c30-9ad4-43c9-bafa-e559384d56c9\") " pod="openshift-infra/auto-csr-approver-29558054-sbbk8"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.704010 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e"
Mar 14 10:14:00 crc kubenswrapper[4869]: I0314 10:14:00.764542 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558054-sbbk8"
Mar 14 10:14:01 crc kubenswrapper[4869]: I0314 10:14:01.213690 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"}
Mar 14 10:14:01 crc kubenswrapper[4869]: I0314 10:14:01.291201 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558054-sbbk8"]
Mar 14 10:14:02 crc kubenswrapper[4869]: I0314 10:14:02.244622 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558054-sbbk8" event={"ID":"eb1c3c30-9ad4-43c9-bafa-e559384d56c9","Type":"ContainerStarted","Data":"d9b120073a0307359f31bca5ea63b6fcd180a155548057bacc0525dd322617f7"}
Mar 14 10:14:03 crc kubenswrapper[4869]: I0314 10:14:03.255891 4869 generic.go:334] "Generic (PLEG): container finished" podID="eb1c3c30-9ad4-43c9-bafa-e559384d56c9" containerID="f376515417a4ac6dd70bd63ce832baad3e9033f85efd569119684eb773348b01" exitCode=0
Mar 14 10:14:03 crc kubenswrapper[4869]: I0314 10:14:03.256006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558054-sbbk8" event={"ID":"eb1c3c30-9ad4-43c9-bafa-e559384d56c9","Type":"ContainerDied","Data":"f376515417a4ac6dd70bd63ce832baad3e9033f85efd569119684eb773348b01"}
Mar 14 10:14:04 crc kubenswrapper[4869]: I0314 10:14:04.404499 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql"
Mar 14 10:14:04 crc kubenswrapper[4869]: I0314 10:14:04.404844 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql"
Mar 14 10:14:04 crc kubenswrapper[4869]: I0314 10:14:04.746927 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558054-sbbk8"
Mar 14 10:14:04 crc kubenswrapper[4869]: I0314 10:14:04.789373 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvps8\" (UniqueName: \"kubernetes.io/projected/eb1c3c30-9ad4-43c9-bafa-e559384d56c9-kube-api-access-mvps8\") pod \"eb1c3c30-9ad4-43c9-bafa-e559384d56c9\" (UID: \"eb1c3c30-9ad4-43c9-bafa-e559384d56c9\") "
Mar 14 10:14:04 crc kubenswrapper[4869]: I0314 10:14:04.810751 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1c3c30-9ad4-43c9-bafa-e559384d56c9-kube-api-access-mvps8" (OuterVolumeSpecName: "kube-api-access-mvps8") pod "eb1c3c30-9ad4-43c9-bafa-e559384d56c9" (UID: "eb1c3c30-9ad4-43c9-bafa-e559384d56c9"). InnerVolumeSpecName "kube-api-access-mvps8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 10:14:04 crc kubenswrapper[4869]: I0314 10:14:04.892601 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvps8\" (UniqueName: \"kubernetes.io/projected/eb1c3c30-9ad4-43c9-bafa-e559384d56c9-kube-api-access-mvps8\") on node \"crc\" DevicePath \"\""
Mar 14 10:14:05 crc kubenswrapper[4869]: I0314 10:14:05.279081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558054-sbbk8" event={"ID":"eb1c3c30-9ad4-43c9-bafa-e559384d56c9","Type":"ContainerDied","Data":"d9b120073a0307359f31bca5ea63b6fcd180a155548057bacc0525dd322617f7"}
Mar 14 10:14:05 crc kubenswrapper[4869]: I0314 10:14:05.279144 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9b120073a0307359f31bca5ea63b6fcd180a155548057bacc0525dd322617f7"
Mar 14 10:14:05 crc kubenswrapper[4869]: I0314 10:14:05.279227 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558054-sbbk8"
Mar 14 10:14:05 crc kubenswrapper[4869]: I0314 10:14:05.703634 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:14:05 crc kubenswrapper[4869]: E0314 10:14:05.704163 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:14:05 crc kubenswrapper[4869]: I0314 10:14:05.814519 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558048-dtwl6"]
Mar 14 10:14:05 crc kubenswrapper[4869]: I0314 10:14:05.838542 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558048-dtwl6"]
Mar 14 10:14:07 crc kubenswrapper[4869]: I0314 10:14:07.730813 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9643d2fb-06e0-45f7-94e4-12219ba833a7" path="/var/lib/kubelet/pods/9643d2fb-06e0-45f7-94e4-12219ba833a7/volumes"
Mar 14 10:14:08 crc kubenswrapper[4869]: I0314 10:14:08.610584 4869 scope.go:117] "RemoveContainer" containerID="f64325bae5547e1d0f9db7227f11eefbb5f144662229d7609bec285ed77bd1c6"
Mar 14 10:14:09 crc kubenswrapper[4869]: I0314 10:14:09.325122 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" exitCode=1
Mar 14 10:14:09 crc kubenswrapper[4869]: I0314 10:14:09.325212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"}
Mar 14 10:14:09 crc kubenswrapper[4869]: I0314 10:14:09.325556 4869 scope.go:117] "RemoveContainer" containerID="e5b790173034700a48ce1268bab4604f94255a6ba4d470044facd203c08d5d5e"
Mar 14 10:14:09 crc kubenswrapper[4869]: I0314 10:14:09.326458 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"
Mar 14 10:14:09 crc kubenswrapper[4869]: E0314 10:14:09.326782 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:14:10 crc kubenswrapper[4869]: I0314 10:14:10.704485 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:14:10 crc kubenswrapper[4869]: E0314 10:14:10.705374 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:14:14 crc kubenswrapper[4869]: I0314 10:14:14.404324 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql"
Mar 14 10:14:14 crc kubenswrapper[4869]: I0314 10:14:14.405349 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"
Mar 14 10:14:14 crc kubenswrapper[4869]: E0314 10:14:14.405673 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:14:14 crc kubenswrapper[4869]: I0314 10:14:14.406534 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql"
Mar 14 10:14:15 crc kubenswrapper[4869]: I0314 10:14:15.420837 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"
Mar 14 10:14:15 crc kubenswrapper[4869]: E0314 10:14:15.421630 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:14:16 crc kubenswrapper[4869]: I0314 10:14:16.704770 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:14:16 crc kubenswrapper[4869]: E0314 10:14:16.705654 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:14:25 crc kubenswrapper[4869]: I0314 10:14:25.708678 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:14:25 crc kubenswrapper[4869]: E0314 10:14:25.709888 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:14:28 crc kubenswrapper[4869]: I0314 10:14:28.704931 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:14:28 crc kubenswrapper[4869]: E0314 10:14:28.706193 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:14:30 crc kubenswrapper[4869]: I0314 10:14:30.704799 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"
Mar 14 10:14:30 crc kubenswrapper[4869]: E0314 10:14:30.705592 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:14:36 crc kubenswrapper[4869]: I0314 10:14:36.704204 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:14:36 crc kubenswrapper[4869]: E0314 10:14:36.705016 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:14:41 crc kubenswrapper[4869]: I0314 10:14:41.707937 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"
Mar 14 10:14:41 crc kubenswrapper[4869]: I0314 10:14:41.708789 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:14:41 crc kubenswrapper[4869]: E0314 10:14:41.709174 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:14:41 crc kubenswrapper[4869]: E0314 10:14:41.709789 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:14:50 crc kubenswrapper[4869]: I0314 10:14:50.704148 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:14:50 crc kubenswrapper[4869]: E0314 10:14:50.705366 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:14:54 crc kubenswrapper[4869]: I0314 10:14:54.704660 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:14:54 crc kubenswrapper[4869]: E0314 10:14:54.705861 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:14:55 crc kubenswrapper[4869]: I0314 10:14:55.705401 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"
Mar 14 10:14:55 crc kubenswrapper[4869]: E0314 10:14:55.705760 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.172705 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"]
Mar 14 10:15:00 crc kubenswrapper[4869]: E0314 10:15:00.174337 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1c3c30-9ad4-43c9-bafa-e559384d56c9" containerName="oc"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.174367 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1c3c30-9ad4-43c9-bafa-e559384d56c9" containerName="oc"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.174854 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1c3c30-9ad4-43c9-bafa-e559384d56c9" containerName="oc"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.176246 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.180713 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.180892 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.192800 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"]
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.203217 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/237c0599-46f5-4f5e-995c-336b5716fe62-secret-volume\") pod \"collect-profiles-29558055-c49j2\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.203324 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/237c0599-46f5-4f5e-995c-336b5716fe62-config-volume\") pod \"collect-profiles-29558055-c49j2\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.203371 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhkm\" (UniqueName: \"kubernetes.io/projected/237c0599-46f5-4f5e-995c-336b5716fe62-kube-api-access-wrhkm\") pod \"collect-profiles-29558055-c49j2\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.305816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/237c0599-46f5-4f5e-995c-336b5716fe62-secret-volume\") pod \"collect-profiles-29558055-c49j2\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.305936 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/237c0599-46f5-4f5e-995c-336b5716fe62-config-volume\") pod \"collect-profiles-29558055-c49j2\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.305976 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrhkm\" (UniqueName: \"kubernetes.io/projected/237c0599-46f5-4f5e-995c-336b5716fe62-kube-api-access-wrhkm\") pod \"collect-profiles-29558055-c49j2\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.307392 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/237c0599-46f5-4f5e-995c-336b5716fe62-config-volume\") pod \"collect-profiles-29558055-c49j2\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.315492 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/237c0599-46f5-4f5e-995c-336b5716fe62-secret-volume\") pod \"collect-profiles-29558055-c49j2\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.330172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrhkm\" (UniqueName: \"kubernetes.io/projected/237c0599-46f5-4f5e-995c-336b5716fe62-kube-api-access-wrhkm\") pod \"collect-profiles-29558055-c49j2\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:00 crc kubenswrapper[4869]: I0314 10:15:00.512495 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:01 crc kubenswrapper[4869]: I0314 10:15:01.006374 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"]
Mar 14 10:15:01 crc kubenswrapper[4869]: I0314 10:15:01.704966 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:15:01 crc kubenswrapper[4869]: E0314 10:15:01.706474 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:15:01 crc kubenswrapper[4869]: I0314 10:15:01.973916 4869 generic.go:334] "Generic (PLEG): container finished" podID="237c0599-46f5-4f5e-995c-336b5716fe62" containerID="5003511af22a472fbb97ce8e470a83c6785303c017b3dc90e84f8a021a8939bf" exitCode=0
Mar 14 10:15:01 crc kubenswrapper[4869]: I0314 10:15:01.973989 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2" event={"ID":"237c0599-46f5-4f5e-995c-336b5716fe62","Type":"ContainerDied","Data":"5003511af22a472fbb97ce8e470a83c6785303c017b3dc90e84f8a021a8939bf"}
Mar 14 10:15:01 crc kubenswrapper[4869]: I0314 10:15:01.974037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2" event={"ID":"237c0599-46f5-4f5e-995c-336b5716fe62","Type":"ContainerStarted","Data":"e9e46c21c9657b9fdf5242cf1e9cb52169a225c47edf4247a5bdf784372db9d6"}
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.466267 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.584838 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrhkm\" (UniqueName: \"kubernetes.io/projected/237c0599-46f5-4f5e-995c-336b5716fe62-kube-api-access-wrhkm\") pod \"237c0599-46f5-4f5e-995c-336b5716fe62\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") "
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.585151 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/237c0599-46f5-4f5e-995c-336b5716fe62-config-volume\") pod \"237c0599-46f5-4f5e-995c-336b5716fe62\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") "
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.585253 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/237c0599-46f5-4f5e-995c-336b5716fe62-secret-volume\") pod \"237c0599-46f5-4f5e-995c-336b5716fe62\" (UID: \"237c0599-46f5-4f5e-995c-336b5716fe62\") "
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.585928 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/237c0599-46f5-4f5e-995c-336b5716fe62-config-volume" (OuterVolumeSpecName: "config-volume") pod "237c0599-46f5-4f5e-995c-336b5716fe62" (UID: "237c0599-46f5-4f5e-995c-336b5716fe62"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.595202 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/237c0599-46f5-4f5e-995c-336b5716fe62-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "237c0599-46f5-4f5e-995c-336b5716fe62" (UID: "237c0599-46f5-4f5e-995c-336b5716fe62"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.595889 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237c0599-46f5-4f5e-995c-336b5716fe62-kube-api-access-wrhkm" (OuterVolumeSpecName: "kube-api-access-wrhkm") pod "237c0599-46f5-4f5e-995c-336b5716fe62" (UID: "237c0599-46f5-4f5e-995c-336b5716fe62"). InnerVolumeSpecName "kube-api-access-wrhkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.688209 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/237c0599-46f5-4f5e-995c-336b5716fe62-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.688264 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrhkm\" (UniqueName: \"kubernetes.io/projected/237c0599-46f5-4f5e-995c-336b5716fe62-kube-api-access-wrhkm\") on node \"crc\" DevicePath \"\""
Mar 14 10:15:03 crc kubenswrapper[4869]: I0314 10:15:03.688283 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/237c0599-46f5-4f5e-995c-336b5716fe62-config-volume\") on node \"crc\" DevicePath \"\""
Mar 14 10:15:04 crc kubenswrapper[4869]: I0314 10:15:04.000110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2" event={"ID":"237c0599-46f5-4f5e-995c-336b5716fe62","Type":"ContainerDied","Data":"e9e46c21c9657b9fdf5242cf1e9cb52169a225c47edf4247a5bdf784372db9d6"}
Mar 14 10:15:04 crc kubenswrapper[4869]: I0314 10:15:04.000178 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e46c21c9657b9fdf5242cf1e9cb52169a225c47edf4247a5bdf784372db9d6"
Mar 14 10:15:04 crc kubenswrapper[4869]: I0314 10:15:04.000261 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29558055-c49j2"
Mar 14 10:15:04 crc kubenswrapper[4869]: I0314 10:15:04.610902 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq"]
Mar 14 10:15:04 crc kubenswrapper[4869]: I0314 10:15:04.624025 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29558010-p6zmq"]
Mar 14 10:15:05 crc kubenswrapper[4869]: I0314 10:15:05.728301 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb653d00-0e56-459f-aef8-976660ca7c22" path="/var/lib/kubelet/pods/eb653d00-0e56-459f-aef8-976660ca7c22/volumes"
Mar 14 10:15:06 crc kubenswrapper[4869]: I0314 10:15:06.705258 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"
Mar 14 10:15:06 crc kubenswrapper[4869]: E0314 10:15:06.706313 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:15:08 crc kubenswrapper[4869]: I0314 10:15:08.704004 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:15:08 crc kubenswrapper[4869]: E0314 10:15:08.704761 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:15:08 crc kubenswrapper[4869]: I0314 10:15:08.714153 4869 scope.go:117] "RemoveContainer" containerID="d10ebe7185df19b82ea3088f72a7cb37ca9f0c0bc039e6200f28844e31484b53"
Mar 14 10:15:14 crc kubenswrapper[4869]: I0314 10:15:14.704630 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:15:14 crc kubenswrapper[4869]: E0314 10:15:14.705392 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:15:18 crc kubenswrapper[4869]: I0314 10:15:18.704496 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"
Mar 14 10:15:18 crc kubenswrapper[4869]: E0314 10:15:18.705593 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:15:23 crc kubenswrapper[4869]: I0314 10:15:23.704773 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:15:23 crc kubenswrapper[4869]: E0314 10:15:23.706035 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:15:25 crc kubenswrapper[4869]: I0314 10:15:25.705617 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:15:25 crc kubenswrapper[4869]: E0314 10:15:25.706604 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:15:31 crc kubenswrapper[4869]: I0314 10:15:31.704881 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea"
Mar 14 10:15:31 crc kubenswrapper[4869]: E0314 10:15:31.705875 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:15:38 crc kubenswrapper[4869]: I0314 10:15:38.704014 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400"
Mar 14 10:15:38 crc kubenswrapper[4869]: I0314 10:15:38.704795 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a"
Mar 14 10:15:38 crc kubenswrapper[4869]: E0314 10:15:38.705278 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 
10:15:38 crc kubenswrapper[4869]: E0314 10:15:38.705639 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:15:44 crc kubenswrapper[4869]: I0314 10:15:44.705280 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:15:44 crc kubenswrapper[4869]: E0314 10:15:44.706574 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:15:50 crc kubenswrapper[4869]: I0314 10:15:50.706945 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:15:50 crc kubenswrapper[4869]: E0314 10:15:50.707991 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:15:52 crc kubenswrapper[4869]: I0314 10:15:52.703544 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:15:53 crc kubenswrapper[4869]: I0314 10:15:53.612222 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"06387255575a0dc8979b135fc7d4a0acb46b9ddb64985eb1e9bd5653179d10ba"} Mar 14 10:15:56 crc kubenswrapper[4869]: I0314 10:15:56.704979 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:15:56 crc kubenswrapper[4869]: E0314 10:15:56.705970 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.155935 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558056-pwpsf"] Mar 14 10:16:00 crc kubenswrapper[4869]: E0314 10:16:00.157292 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237c0599-46f5-4f5e-995c-336b5716fe62" containerName="collect-profiles" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.157320 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="237c0599-46f5-4f5e-995c-336b5716fe62" containerName="collect-profiles" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.157745 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="237c0599-46f5-4f5e-995c-336b5716fe62" containerName="collect-profiles" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.158893 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558056-pwpsf" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.166388 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.166479 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.166813 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.175631 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558056-pwpsf"] Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.277859 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbhc5\" (UniqueName: \"kubernetes.io/projected/528d6b75-4067-4f1f-8585-6fe161aea0b4-kube-api-access-kbhc5\") pod \"auto-csr-approver-29558056-pwpsf\" (UID: \"528d6b75-4067-4f1f-8585-6fe161aea0b4\") " pod="openshift-infra/auto-csr-approver-29558056-pwpsf" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.379526 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbhc5\" (UniqueName: \"kubernetes.io/projected/528d6b75-4067-4f1f-8585-6fe161aea0b4-kube-api-access-kbhc5\") pod \"auto-csr-approver-29558056-pwpsf\" (UID: \"528d6b75-4067-4f1f-8585-6fe161aea0b4\") " pod="openshift-infra/auto-csr-approver-29558056-pwpsf" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.399369 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhc5\" (UniqueName: \"kubernetes.io/projected/528d6b75-4067-4f1f-8585-6fe161aea0b4-kube-api-access-kbhc5\") pod \"auto-csr-approver-29558056-pwpsf\" (UID: \"528d6b75-4067-4f1f-8585-6fe161aea0b4\") " 
pod="openshift-infra/auto-csr-approver-29558056-pwpsf" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.490930 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558056-pwpsf" Mar 14 10:16:00 crc kubenswrapper[4869]: I0314 10:16:00.977015 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558056-pwpsf"] Mar 14 10:16:01 crc kubenswrapper[4869]: I0314 10:16:01.717370 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558056-pwpsf" event={"ID":"528d6b75-4067-4f1f-8585-6fe161aea0b4","Type":"ContainerStarted","Data":"7856055a833babd4739f51bebfbdcb1c785cf8a408a33604c87e1f227997bbad"} Mar 14 10:16:02 crc kubenswrapper[4869]: I0314 10:16:02.735086 4869 generic.go:334] "Generic (PLEG): container finished" podID="528d6b75-4067-4f1f-8585-6fe161aea0b4" containerID="cb1c90a96870002a60661180175eaf88231319cb9cd659a373076ce4c93e00fa" exitCode=0 Mar 14 10:16:02 crc kubenswrapper[4869]: I0314 10:16:02.735226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558056-pwpsf" event={"ID":"528d6b75-4067-4f1f-8585-6fe161aea0b4","Type":"ContainerDied","Data":"cb1c90a96870002a60661180175eaf88231319cb9cd659a373076ce4c93e00fa"} Mar 14 10:16:04 crc kubenswrapper[4869]: I0314 10:16:04.191777 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558056-pwpsf" Mar 14 10:16:04 crc kubenswrapper[4869]: I0314 10:16:04.374244 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbhc5\" (UniqueName: \"kubernetes.io/projected/528d6b75-4067-4f1f-8585-6fe161aea0b4-kube-api-access-kbhc5\") pod \"528d6b75-4067-4f1f-8585-6fe161aea0b4\" (UID: \"528d6b75-4067-4f1f-8585-6fe161aea0b4\") " Mar 14 10:16:04 crc kubenswrapper[4869]: I0314 10:16:04.385416 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528d6b75-4067-4f1f-8585-6fe161aea0b4-kube-api-access-kbhc5" (OuterVolumeSpecName: "kube-api-access-kbhc5") pod "528d6b75-4067-4f1f-8585-6fe161aea0b4" (UID: "528d6b75-4067-4f1f-8585-6fe161aea0b4"). InnerVolumeSpecName "kube-api-access-kbhc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:16:04 crc kubenswrapper[4869]: I0314 10:16:04.477088 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbhc5\" (UniqueName: \"kubernetes.io/projected/528d6b75-4067-4f1f-8585-6fe161aea0b4-kube-api-access-kbhc5\") on node \"crc\" DevicePath \"\"" Mar 14 10:16:04 crc kubenswrapper[4869]: I0314 10:16:04.764629 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558056-pwpsf" event={"ID":"528d6b75-4067-4f1f-8585-6fe161aea0b4","Type":"ContainerDied","Data":"7856055a833babd4739f51bebfbdcb1c785cf8a408a33604c87e1f227997bbad"} Mar 14 10:16:04 crc kubenswrapper[4869]: I0314 10:16:04.764687 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7856055a833babd4739f51bebfbdcb1c785cf8a408a33604c87e1f227997bbad" Mar 14 10:16:04 crc kubenswrapper[4869]: I0314 10:16:04.764697 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558056-pwpsf" Mar 14 10:16:05 crc kubenswrapper[4869]: I0314 10:16:05.294760 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558050-k8c98"] Mar 14 10:16:05 crc kubenswrapper[4869]: I0314 10:16:05.309410 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558050-k8c98"] Mar 14 10:16:05 crc kubenswrapper[4869]: I0314 10:16:05.707282 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:16:05 crc kubenswrapper[4869]: E0314 10:16:05.707776 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:16:05 crc kubenswrapper[4869]: I0314 10:16:05.730981 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="834e2d14-3c26-4530-8024-ca04f292390c" path="/var/lib/kubelet/pods/834e2d14-3c26-4530-8024-ca04f292390c/volumes" Mar 14 10:16:08 crc kubenswrapper[4869]: I0314 10:16:08.814466 4869 scope.go:117] "RemoveContainer" containerID="b22d8162ad7d4a0f62b51b0a4e4bf8979339b61214d3e01765d4ae4cd7deaed7" Mar 14 10:16:09 crc kubenswrapper[4869]: I0314 10:16:09.704335 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:16:09 crc kubenswrapper[4869]: E0314 10:16:09.705425 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" 
podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:16:16 crc kubenswrapper[4869]: I0314 10:16:16.704747 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:16:16 crc kubenswrapper[4869]: E0314 10:16:16.705579 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:16:21 crc kubenswrapper[4869]: I0314 10:16:21.706965 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:16:21 crc kubenswrapper[4869]: E0314 10:16:21.709013 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.579763 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-428ll"] Mar 14 10:16:27 crc kubenswrapper[4869]: E0314 10:16:27.581719 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="528d6b75-4067-4f1f-8585-6fe161aea0b4" containerName="oc" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.581744 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="528d6b75-4067-4f1f-8585-6fe161aea0b4" containerName="oc" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.582327 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="528d6b75-4067-4f1f-8585-6fe161aea0b4" containerName="oc" Mar 14 10:16:27 crc kubenswrapper[4869]: 
I0314 10:16:27.587764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.604745 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-428ll"] Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.617680 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-catalog-content\") pod \"certified-operators-428ll\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.617784 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qg2n\" (UniqueName: \"kubernetes.io/projected/87eccc4c-5854-4f20-ab97-32db879a2846-kube-api-access-7qg2n\") pod \"certified-operators-428ll\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.617842 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-utilities\") pod \"certified-operators-428ll\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.710639 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:16:27 crc kubenswrapper[4869]: E0314 10:16:27.710901 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.722441 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qg2n\" (UniqueName: \"kubernetes.io/projected/87eccc4c-5854-4f20-ab97-32db879a2846-kube-api-access-7qg2n\") pod \"certified-operators-428ll\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.722543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-utilities\") pod \"certified-operators-428ll\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.722830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-catalog-content\") pod \"certified-operators-428ll\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.725268 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-catalog-content\") pod \"certified-operators-428ll\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.725391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-utilities\") pod 
\"certified-operators-428ll\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.760662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qg2n\" (UniqueName: \"kubernetes.io/projected/87eccc4c-5854-4f20-ab97-32db879a2846-kube-api-access-7qg2n\") pod \"certified-operators-428ll\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:27 crc kubenswrapper[4869]: I0314 10:16:27.940238 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:28 crc kubenswrapper[4869]: I0314 10:16:28.308614 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-428ll"] Mar 14 10:16:29 crc kubenswrapper[4869]: I0314 10:16:29.054976 4869 generic.go:334] "Generic (PLEG): container finished" podID="87eccc4c-5854-4f20-ab97-32db879a2846" containerID="e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96" exitCode=0 Mar 14 10:16:29 crc kubenswrapper[4869]: I0314 10:16:29.055502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-428ll" event={"ID":"87eccc4c-5854-4f20-ab97-32db879a2846","Type":"ContainerDied","Data":"e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96"} Mar 14 10:16:29 crc kubenswrapper[4869]: I0314 10:16:29.055606 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-428ll" event={"ID":"87eccc4c-5854-4f20-ab97-32db879a2846","Type":"ContainerStarted","Data":"8decb51c063df10c4892634010d5542fd9dae27a47cfa52c52b933bd74c070e8"} Mar 14 10:16:30 crc kubenswrapper[4869]: I0314 10:16:30.069678 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-428ll" 
event={"ID":"87eccc4c-5854-4f20-ab97-32db879a2846","Type":"ContainerStarted","Data":"1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42"} Mar 14 10:16:31 crc kubenswrapper[4869]: I0314 10:16:31.082430 4869 generic.go:334] "Generic (PLEG): container finished" podID="87eccc4c-5854-4f20-ab97-32db879a2846" containerID="1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42" exitCode=0 Mar 14 10:16:31 crc kubenswrapper[4869]: I0314 10:16:31.082499 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-428ll" event={"ID":"87eccc4c-5854-4f20-ab97-32db879a2846","Type":"ContainerDied","Data":"1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42"} Mar 14 10:16:32 crc kubenswrapper[4869]: I0314 10:16:32.096966 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-428ll" event={"ID":"87eccc4c-5854-4f20-ab97-32db879a2846","Type":"ContainerStarted","Data":"c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e"} Mar 14 10:16:32 crc kubenswrapper[4869]: I0314 10:16:32.136039 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-428ll" podStartSLOduration=2.659483226 podStartE2EDuration="5.136010934s" podCreationTimestamp="2026-03-14 10:16:27 +0000 UTC" firstStartedPulling="2026-03-14 10:16:29.058908629 +0000 UTC m=+4742.031190722" lastFinishedPulling="2026-03-14 10:16:31.535436377 +0000 UTC m=+4744.507718430" observedRunningTime="2026-03-14 10:16:32.125553427 +0000 UTC m=+4745.097835540" watchObservedRunningTime="2026-03-14 10:16:32.136010934 +0000 UTC m=+4745.108293017" Mar 14 10:16:34 crc kubenswrapper[4869]: I0314 10:16:34.703930 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:16:34 crc kubenswrapper[4869]: E0314 10:16:34.704886 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:16:37 crc kubenswrapper[4869]: I0314 10:16:37.940764 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:37 crc kubenswrapper[4869]: I0314 10:16:37.941333 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:38 crc kubenswrapper[4869]: I0314 10:16:38.034385 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:38 crc kubenswrapper[4869]: I0314 10:16:38.218678 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:38 crc kubenswrapper[4869]: I0314 10:16:38.287791 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-428ll"] Mar 14 10:16:39 crc kubenswrapper[4869]: I0314 10:16:39.706223 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:16:39 crc kubenswrapper[4869]: E0314 10:16:39.706649 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:16:40 crc kubenswrapper[4869]: I0314 10:16:40.181953 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-428ll" 
podUID="87eccc4c-5854-4f20-ab97-32db879a2846" containerName="registry-server" containerID="cri-o://c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e" gracePeriod=2 Mar 14 10:16:40 crc kubenswrapper[4869]: I0314 10:16:40.768665 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:40 crc kubenswrapper[4869]: I0314 10:16:40.921553 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-catalog-content\") pod \"87eccc4c-5854-4f20-ab97-32db879a2846\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " Mar 14 10:16:40 crc kubenswrapper[4869]: I0314 10:16:40.921754 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-utilities\") pod \"87eccc4c-5854-4f20-ab97-32db879a2846\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " Mar 14 10:16:40 crc kubenswrapper[4869]: I0314 10:16:40.921818 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qg2n\" (UniqueName: \"kubernetes.io/projected/87eccc4c-5854-4f20-ab97-32db879a2846-kube-api-access-7qg2n\") pod \"87eccc4c-5854-4f20-ab97-32db879a2846\" (UID: \"87eccc4c-5854-4f20-ab97-32db879a2846\") " Mar 14 10:16:40 crc kubenswrapper[4869]: I0314 10:16:40.922814 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-utilities" (OuterVolumeSpecName: "utilities") pod "87eccc4c-5854-4f20-ab97-32db879a2846" (UID: "87eccc4c-5854-4f20-ab97-32db879a2846"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:16:40 crc kubenswrapper[4869]: I0314 10:16:40.950711 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87eccc4c-5854-4f20-ab97-32db879a2846-kube-api-access-7qg2n" (OuterVolumeSpecName: "kube-api-access-7qg2n") pod "87eccc4c-5854-4f20-ab97-32db879a2846" (UID: "87eccc4c-5854-4f20-ab97-32db879a2846"). InnerVolumeSpecName "kube-api-access-7qg2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.024074 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.024111 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qg2n\" (UniqueName: \"kubernetes.io/projected/87eccc4c-5854-4f20-ab97-32db879a2846-kube-api-access-7qg2n\") on node \"crc\" DevicePath \"\"" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.195661 4869 generic.go:334] "Generic (PLEG): container finished" podID="87eccc4c-5854-4f20-ab97-32db879a2846" containerID="c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e" exitCode=0 Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.195702 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-428ll" event={"ID":"87eccc4c-5854-4f20-ab97-32db879a2846","Type":"ContainerDied","Data":"c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e"} Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.195730 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-428ll" event={"ID":"87eccc4c-5854-4f20-ab97-32db879a2846","Type":"ContainerDied","Data":"8decb51c063df10c4892634010d5542fd9dae27a47cfa52c52b933bd74c070e8"} Mar 14 10:16:41 crc kubenswrapper[4869]: 
I0314 10:16:41.195747 4869 scope.go:117] "RemoveContainer" containerID="c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.195796 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-428ll" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.231669 4869 scope.go:117] "RemoveContainer" containerID="1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.271551 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87eccc4c-5854-4f20-ab97-32db879a2846" (UID: "87eccc4c-5854-4f20-ab97-32db879a2846"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.330971 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87eccc4c-5854-4f20-ab97-32db879a2846-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.544263 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-428ll"] Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.553863 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-428ll"] Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.738611 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87eccc4c-5854-4f20-ab97-32db879a2846" path="/var/lib/kubelet/pods/87eccc4c-5854-4f20-ab97-32db879a2846/volumes" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.762904 4869 scope.go:117] "RemoveContainer" 
containerID="e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.795813 4869 scope.go:117] "RemoveContainer" containerID="c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e" Mar 14 10:16:41 crc kubenswrapper[4869]: E0314 10:16:41.796911 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e\": container with ID starting with c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e not found: ID does not exist" containerID="c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.797006 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e"} err="failed to get container status \"c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e\": rpc error: code = NotFound desc = could not find container \"c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e\": container with ID starting with c1a2ad2629ec2b5094e0b5e6982ec335f7ae73795b7f59279b547804c1ac5f5e not found: ID does not exist" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.797059 4869 scope.go:117] "RemoveContainer" containerID="1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42" Mar 14 10:16:41 crc kubenswrapper[4869]: E0314 10:16:41.797492 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42\": container with ID starting with 1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42 not found: ID does not exist" containerID="1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42" Mar 14 10:16:41 crc 
kubenswrapper[4869]: I0314 10:16:41.797540 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42"} err="failed to get container status \"1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42\": rpc error: code = NotFound desc = could not find container \"1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42\": container with ID starting with 1762844a688049a8a850602c31ceb596921c9ea70c98105c099b75096f708b42 not found: ID does not exist" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.797563 4869 scope.go:117] "RemoveContainer" containerID="e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96" Mar 14 10:16:41 crc kubenswrapper[4869]: E0314 10:16:41.797958 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96\": container with ID starting with e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96 not found: ID does not exist" containerID="e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96" Mar 14 10:16:41 crc kubenswrapper[4869]: I0314 10:16:41.798034 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96"} err="failed to get container status \"e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96\": rpc error: code = NotFound desc = could not find container \"e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96\": container with ID starting with e6bd63c793f888cdbcd9afd24bccc16fff87933d51b08e712dc12c34c4c5cc96 not found: ID does not exist" Mar 14 10:16:46 crc kubenswrapper[4869]: I0314 10:16:46.705166 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 
10:16:46 crc kubenswrapper[4869]: E0314 10:16:46.706733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:16:50 crc kubenswrapper[4869]: I0314 10:16:50.705550 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:16:50 crc kubenswrapper[4869]: E0314 10:16:50.706397 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:16:57 crc kubenswrapper[4869]: I0314 10:16:57.713167 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:16:57 crc kubenswrapper[4869]: E0314 10:16:57.714043 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:17:02 crc kubenswrapper[4869]: I0314 10:17:02.704535 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:17:02 crc kubenswrapper[4869]: E0314 10:17:02.705398 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:17:09 crc kubenswrapper[4869]: I0314 10:17:09.705344 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:17:09 crc kubenswrapper[4869]: E0314 10:17:09.706233 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:17:17 crc kubenswrapper[4869]: I0314 10:17:17.723963 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:17:17 crc kubenswrapper[4869]: E0314 10:17:17.725188 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:17:22 crc kubenswrapper[4869]: I0314 10:17:22.704822 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:17:22 crc kubenswrapper[4869]: E0314 10:17:22.706063 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 
10:17:31.636683 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94f4m"] Mar 14 10:17:31 crc kubenswrapper[4869]: E0314 10:17:31.637859 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87eccc4c-5854-4f20-ab97-32db879a2846" containerName="registry-server" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.637877 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="87eccc4c-5854-4f20-ab97-32db879a2846" containerName="registry-server" Mar 14 10:17:31 crc kubenswrapper[4869]: E0314 10:17:31.637898 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87eccc4c-5854-4f20-ab97-32db879a2846" containerName="extract-utilities" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.637906 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="87eccc4c-5854-4f20-ab97-32db879a2846" containerName="extract-utilities" Mar 14 10:17:31 crc kubenswrapper[4869]: E0314 10:17:31.637917 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87eccc4c-5854-4f20-ab97-32db879a2846" containerName="extract-content" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.637924 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="87eccc4c-5854-4f20-ab97-32db879a2846" containerName="extract-content" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.638102 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="87eccc4c-5854-4f20-ab97-32db879a2846" containerName="registry-server" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.639678 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.659434 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94f4m"] Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.706018 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-utilities\") pod \"redhat-operators-94f4m\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.706100 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-catalog-content\") pod \"redhat-operators-94f4m\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.706139 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzh8p\" (UniqueName: \"kubernetes.io/projected/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-kube-api-access-lzh8p\") pod \"redhat-operators-94f4m\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.709485 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:17:31 crc kubenswrapper[4869]: E0314 10:17:31.711561 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" 
pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.809213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-utilities\") pod \"redhat-operators-94f4m\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.809288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-catalog-content\") pod \"redhat-operators-94f4m\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.809344 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzh8p\" (UniqueName: \"kubernetes.io/projected/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-kube-api-access-lzh8p\") pod \"redhat-operators-94f4m\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.810242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-catalog-content\") pod \"redhat-operators-94f4m\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.810317 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-utilities\") pod \"redhat-operators-94f4m\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " 
pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:31 crc kubenswrapper[4869]: I0314 10:17:31.829282 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzh8p\" (UniqueName: \"kubernetes.io/projected/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-kube-api-access-lzh8p\") pod \"redhat-operators-94f4m\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:32 crc kubenswrapper[4869]: I0314 10:17:32.025836 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:32 crc kubenswrapper[4869]: I0314 10:17:32.889001 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94f4m"] Mar 14 10:17:33 crc kubenswrapper[4869]: I0314 10:17:33.827393 4869 generic.go:334] "Generic (PLEG): container finished" podID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerID="82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6" exitCode=0 Mar 14 10:17:33 crc kubenswrapper[4869]: I0314 10:17:33.827450 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f4m" event={"ID":"f2d95ee5-b634-460c-9c95-7a8b0b5c7158","Type":"ContainerDied","Data":"82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6"} Mar 14 10:17:33 crc kubenswrapper[4869]: I0314 10:17:33.827681 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f4m" event={"ID":"f2d95ee5-b634-460c-9c95-7a8b0b5c7158","Type":"ContainerStarted","Data":"e07cda8724e426cacef559a7790e844d3252439cd889617f3e7d392c0923126e"} Mar 14 10:17:33 crc kubenswrapper[4869]: I0314 10:17:33.830022 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 10:17:34 crc kubenswrapper[4869]: I0314 10:17:34.851001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-94f4m" event={"ID":"f2d95ee5-b634-460c-9c95-7a8b0b5c7158","Type":"ContainerStarted","Data":"a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5"} Mar 14 10:17:35 crc kubenswrapper[4869]: I0314 10:17:35.859921 4869 generic.go:334] "Generic (PLEG): container finished" podID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerID="a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5" exitCode=0 Mar 14 10:17:35 crc kubenswrapper[4869]: I0314 10:17:35.859965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f4m" event={"ID":"f2d95ee5-b634-460c-9c95-7a8b0b5c7158","Type":"ContainerDied","Data":"a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5"} Mar 14 10:17:36 crc kubenswrapper[4869]: I0314 10:17:36.703838 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:17:36 crc kubenswrapper[4869]: E0314 10:17:36.704070 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:17:37 crc kubenswrapper[4869]: I0314 10:17:37.888001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f4m" event={"ID":"f2d95ee5-b634-460c-9c95-7a8b0b5c7158","Type":"ContainerStarted","Data":"a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f"} Mar 14 10:17:37 crc kubenswrapper[4869]: I0314 10:17:37.927052 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94f4m" podStartSLOduration=4.160183206 podStartE2EDuration="6.927017769s" podCreationTimestamp="2026-03-14 10:17:31 +0000 UTC" 
firstStartedPulling="2026-03-14 10:17:33.829780864 +0000 UTC m=+4806.802062917" lastFinishedPulling="2026-03-14 10:17:36.596615417 +0000 UTC m=+4809.568897480" observedRunningTime="2026-03-14 10:17:37.916764678 +0000 UTC m=+4810.889046741" watchObservedRunningTime="2026-03-14 10:17:37.927017769 +0000 UTC m=+4810.899299862" Mar 14 10:17:42 crc kubenswrapper[4869]: I0314 10:17:42.026375 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:42 crc kubenswrapper[4869]: I0314 10:17:42.027086 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:42 crc kubenswrapper[4869]: I0314 10:17:42.704059 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:17:42 crc kubenswrapper[4869]: E0314 10:17:42.704673 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:17:43 crc kubenswrapper[4869]: I0314 10:17:43.111883 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-94f4m" podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerName="registry-server" probeResult="failure" output=< Mar 14 10:17:43 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Mar 14 10:17:43 crc kubenswrapper[4869]: > Mar 14 10:17:50 crc kubenswrapper[4869]: I0314 10:17:50.703928 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:17:50 crc kubenswrapper[4869]: E0314 10:17:50.704865 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:17:52 crc kubenswrapper[4869]: I0314 10:17:52.377522 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:52 crc kubenswrapper[4869]: I0314 10:17:52.426845 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:52 crc kubenswrapper[4869]: I0314 10:17:52.614417 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94f4m"] Mar 14 10:17:54 crc kubenswrapper[4869]: I0314 10:17:54.062081 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-94f4m" podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerName="registry-server" containerID="cri-o://a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f" gracePeriod=2 Mar 14 10:17:54 crc kubenswrapper[4869]: I0314 10:17:54.794172 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:54 crc kubenswrapper[4869]: I0314 10:17:54.917439 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-catalog-content\") pod \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " Mar 14 10:17:54 crc kubenswrapper[4869]: I0314 10:17:54.917613 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-utilities\") pod \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " Mar 14 10:17:54 crc kubenswrapper[4869]: I0314 10:17:54.917655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzh8p\" (UniqueName: \"kubernetes.io/projected/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-kube-api-access-lzh8p\") pod \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\" (UID: \"f2d95ee5-b634-460c-9c95-7a8b0b5c7158\") " Mar 14 10:17:54 crc kubenswrapper[4869]: I0314 10:17:54.919547 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-utilities" (OuterVolumeSpecName: "utilities") pod "f2d95ee5-b634-460c-9c95-7a8b0b5c7158" (UID: "f2d95ee5-b634-460c-9c95-7a8b0b5c7158"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:17:54 crc kubenswrapper[4869]: I0314 10:17:54.931165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-kube-api-access-lzh8p" (OuterVolumeSpecName: "kube-api-access-lzh8p") pod "f2d95ee5-b634-460c-9c95-7a8b0b5c7158" (UID: "f2d95ee5-b634-460c-9c95-7a8b0b5c7158"). InnerVolumeSpecName "kube-api-access-lzh8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.020239 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.020287 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzh8p\" (UniqueName: \"kubernetes.io/projected/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-kube-api-access-lzh8p\") on node \"crc\" DevicePath \"\"" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.082712 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2d95ee5-b634-460c-9c95-7a8b0b5c7158" (UID: "f2d95ee5-b634-460c-9c95-7a8b0b5c7158"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.089140 4869 generic.go:334] "Generic (PLEG): container finished" podID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerID="a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f" exitCode=0 Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.089184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f4m" event={"ID":"f2d95ee5-b634-460c-9c95-7a8b0b5c7158","Type":"ContainerDied","Data":"a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f"} Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.089210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94f4m" event={"ID":"f2d95ee5-b634-460c-9c95-7a8b0b5c7158","Type":"ContainerDied","Data":"e07cda8724e426cacef559a7790e844d3252439cd889617f3e7d392c0923126e"} Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.089226 
4869 scope.go:117] "RemoveContainer" containerID="a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.089356 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94f4m" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.113974 4869 scope.go:117] "RemoveContainer" containerID="a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.121853 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2d95ee5-b634-460c-9c95-7a8b0b5c7158-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.128796 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94f4m"] Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.138644 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-94f4m"] Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.151785 4869 scope.go:117] "RemoveContainer" containerID="82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.181709 4869 scope.go:117] "RemoveContainer" containerID="a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f" Mar 14 10:17:55 crc kubenswrapper[4869]: E0314 10:17:55.182136 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f\": container with ID starting with a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f not found: ID does not exist" containerID="a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.182166 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f"} err="failed to get container status \"a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f\": rpc error: code = NotFound desc = could not find container \"a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f\": container with ID starting with a5535bdda413ff4dada13b8e5dcd5a17e362ba984133f41db6672514c2cf5b7f not found: ID does not exist" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.182186 4869 scope.go:117] "RemoveContainer" containerID="a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5" Mar 14 10:17:55 crc kubenswrapper[4869]: E0314 10:17:55.182551 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5\": container with ID starting with a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5 not found: ID does not exist" containerID="a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.182575 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5"} err="failed to get container status \"a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5\": rpc error: code = NotFound desc = could not find container \"a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5\": container with ID starting with a4459ea1ced3750f2393cbee544ac3ee93825aef32951333bb68b440cd83b0a5 not found: ID does not exist" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.182589 4869 scope.go:117] "RemoveContainer" containerID="82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6" Mar 14 10:17:55 crc kubenswrapper[4869]: E0314 
10:17:55.182853 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6\": container with ID starting with 82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6 not found: ID does not exist" containerID="82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.182878 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6"} err="failed to get container status \"82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6\": rpc error: code = NotFound desc = could not find container \"82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6\": container with ID starting with 82571e143fe90975abeaaa5478ca15833024f2b26fd0c7c04939ec77278b04d6 not found: ID does not exist" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.727742 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" path="/var/lib/kubelet/pods/f2d95ee5-b634-460c-9c95-7a8b0b5c7158/volumes" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.995001 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ck6w6/must-gather-q6h4q"] Mar 14 10:17:55 crc kubenswrapper[4869]: E0314 10:17:55.995789 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerName="extract-utilities" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.995813 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerName="extract-utilities" Mar 14 10:17:55 crc kubenswrapper[4869]: E0314 10:17:55.995850 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerName="registry-server" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.995859 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerName="registry-server" Mar 14 10:17:55 crc kubenswrapper[4869]: E0314 10:17:55.995873 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerName="extract-content" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.995882 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerName="extract-content" Mar 14 10:17:55 crc kubenswrapper[4869]: I0314 10:17:55.996144 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d95ee5-b634-460c-9c95-7a8b0b5c7158" containerName="registry-server" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.001966 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.004091 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ck6w6"/"openshift-service-ca.crt" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.004096 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ck6w6"/"kube-root-ca.crt" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.020418 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ck6w6/must-gather-q6h4q"] Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.042165 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g46b\" (UniqueName: \"kubernetes.io/projected/a6c47849-4852-4379-8f28-97955656e693-kube-api-access-9g46b\") pod \"must-gather-q6h4q\" (UID: \"a6c47849-4852-4379-8f28-97955656e693\") " 
pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.042390 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6c47849-4852-4379-8f28-97955656e693-must-gather-output\") pod \"must-gather-q6h4q\" (UID: \"a6c47849-4852-4379-8f28-97955656e693\") " pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.144262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6c47849-4852-4379-8f28-97955656e693-must-gather-output\") pod \"must-gather-q6h4q\" (UID: \"a6c47849-4852-4379-8f28-97955656e693\") " pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.144336 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g46b\" (UniqueName: \"kubernetes.io/projected/a6c47849-4852-4379-8f28-97955656e693-kube-api-access-9g46b\") pod \"must-gather-q6h4q\" (UID: \"a6c47849-4852-4379-8f28-97955656e693\") " pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.144678 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6c47849-4852-4379-8f28-97955656e693-must-gather-output\") pod \"must-gather-q6h4q\" (UID: \"a6c47849-4852-4379-8f28-97955656e693\") " pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.165245 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g46b\" (UniqueName: \"kubernetes.io/projected/a6c47849-4852-4379-8f28-97955656e693-kube-api-access-9g46b\") pod \"must-gather-q6h4q\" (UID: \"a6c47849-4852-4379-8f28-97955656e693\") " 
pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.317786 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.706468 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:17:56 crc kubenswrapper[4869]: E0314 10:17:56.707819 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:17:56 crc kubenswrapper[4869]: I0314 10:17:56.785959 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ck6w6/must-gather-q6h4q"] Mar 14 10:17:57 crc kubenswrapper[4869]: I0314 10:17:57.108345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" event={"ID":"a6c47849-4852-4379-8f28-97955656e693","Type":"ContainerStarted","Data":"c42b960dc51b61a776bf3f6930531974174b8c4470ca9d06eb8828daf4f33db1"} Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.144441 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558058-t44z6"] Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.146526 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558058-t44z6" Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.149023 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.149092 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.149201 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.153180 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558058-t44z6"] Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.240017 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nh8d\" (UniqueName: \"kubernetes.io/projected/aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1-kube-api-access-9nh8d\") pod \"auto-csr-approver-29558058-t44z6\" (UID: \"aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1\") " pod="openshift-infra/auto-csr-approver-29558058-t44z6" Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.341363 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nh8d\" (UniqueName: \"kubernetes.io/projected/aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1-kube-api-access-9nh8d\") pod \"auto-csr-approver-29558058-t44z6\" (UID: \"aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1\") " pod="openshift-infra/auto-csr-approver-29558058-t44z6" Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.358352 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nh8d\" (UniqueName: \"kubernetes.io/projected/aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1-kube-api-access-9nh8d\") pod \"auto-csr-approver-29558058-t44z6\" (UID: \"aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1\") " 
pod="openshift-infra/auto-csr-approver-29558058-t44z6" Mar 14 10:18:00 crc kubenswrapper[4869]: I0314 10:18:00.482750 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558058-t44z6" Mar 14 10:18:04 crc kubenswrapper[4869]: I0314 10:18:04.065465 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558058-t44z6"] Mar 14 10:18:04 crc kubenswrapper[4869]: I0314 10:18:04.198558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558058-t44z6" event={"ID":"aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1","Type":"ContainerStarted","Data":"63a38397c904bb6ebad628b0ffc7e2fa815179b27b094626249d77b490358a88"} Mar 14 10:18:04 crc kubenswrapper[4869]: I0314 10:18:04.201066 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" event={"ID":"a6c47849-4852-4379-8f28-97955656e693","Type":"ContainerStarted","Data":"a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69"} Mar 14 10:18:05 crc kubenswrapper[4869]: I0314 10:18:05.210641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" event={"ID":"a6c47849-4852-4379-8f28-97955656e693","Type":"ContainerStarted","Data":"94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd"} Mar 14 10:18:05 crc kubenswrapper[4869]: I0314 10:18:05.236796 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" podStartSLOduration=3.213753607 podStartE2EDuration="10.23677356s" podCreationTimestamp="2026-03-14 10:17:55 +0000 UTC" firstStartedPulling="2026-03-14 10:17:56.791444033 +0000 UTC m=+4829.763726086" lastFinishedPulling="2026-03-14 10:18:03.814463986 +0000 UTC m=+4836.786746039" observedRunningTime="2026-03-14 10:18:05.226758614 +0000 UTC m=+4838.199040707" watchObservedRunningTime="2026-03-14 10:18:05.23677356 
+0000 UTC m=+4838.209055623" Mar 14 10:18:05 crc kubenswrapper[4869]: I0314 10:18:05.704348 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:18:05 crc kubenswrapper[4869]: E0314 10:18:05.704884 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:18:06 crc kubenswrapper[4869]: I0314 10:18:06.220796 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1" containerID="edac00d5f6752397c0da3a18d8a7e5c23cdbae66fb70d1e951537bef6cb524ad" exitCode=0 Mar 14 10:18:06 crc kubenswrapper[4869]: I0314 10:18:06.220845 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558058-t44z6" event={"ID":"aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1","Type":"ContainerDied","Data":"edac00d5f6752397c0da3a18d8a7e5c23cdbae66fb70d1e951537bef6cb524ad"} Mar 14 10:18:07 crc kubenswrapper[4869]: I0314 10:18:07.632751 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558058-t44z6" Mar 14 10:18:07 crc kubenswrapper[4869]: I0314 10:18:07.703148 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nh8d\" (UniqueName: \"kubernetes.io/projected/aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1-kube-api-access-9nh8d\") pod \"aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1\" (UID: \"aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1\") " Mar 14 10:18:07 crc kubenswrapper[4869]: I0314 10:18:07.714813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1-kube-api-access-9nh8d" (OuterVolumeSpecName: "kube-api-access-9nh8d") pod "aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1" (UID: "aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1"). InnerVolumeSpecName "kube-api-access-9nh8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:18:07 crc kubenswrapper[4869]: I0314 10:18:07.807611 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nh8d\" (UniqueName: \"kubernetes.io/projected/aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1-kube-api-access-9nh8d\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:08 crc kubenswrapper[4869]: I0314 10:18:08.253470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558058-t44z6" event={"ID":"aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1","Type":"ContainerDied","Data":"63a38397c904bb6ebad628b0ffc7e2fa815179b27b094626249d77b490358a88"} Mar 14 10:18:08 crc kubenswrapper[4869]: I0314 10:18:08.253764 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63a38397c904bb6ebad628b0ffc7e2fa815179b27b094626249d77b490358a88" Mar 14 10:18:08 crc kubenswrapper[4869]: I0314 10:18:08.253574 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558058-t44z6" Mar 14 10:18:08 crc kubenswrapper[4869]: I0314 10:18:08.705194 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:18:08 crc kubenswrapper[4869]: E0314 10:18:08.705377 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:18:08 crc kubenswrapper[4869]: I0314 10:18:08.708696 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558052-66gpb"] Mar 14 10:18:08 crc kubenswrapper[4869]: I0314 10:18:08.722471 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558052-66gpb"] Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.605629 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.605946 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.714289 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccaafb2e-9b8b-4109-8d5b-cc37094a36f5" 
path="/var/lib/kubelet/pods/ccaafb2e-9b8b-4109-8d5b-cc37094a36f5/volumes" Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.891557 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ck6w6/crc-debug-dxtmr"] Mar 14 10:18:09 crc kubenswrapper[4869]: E0314 10:18:09.891945 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1" containerName="oc" Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.891962 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1" containerName="oc" Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.892165 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1" containerName="oc" Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.892823 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.894488 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ck6w6"/"default-dockercfg-9dnnr" Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.946387 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d21ca767-0c53-4858-8293-1427b433aac3-host\") pod \"crc-debug-dxtmr\" (UID: \"d21ca767-0c53-4858-8293-1427b433aac3\") " pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:09 crc kubenswrapper[4869]: I0314 10:18:09.946629 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzr4\" (UniqueName: \"kubernetes.io/projected/d21ca767-0c53-4858-8293-1427b433aac3-kube-api-access-jmzr4\") pod \"crc-debug-dxtmr\" (UID: \"d21ca767-0c53-4858-8293-1427b433aac3\") " pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:10 crc 
kubenswrapper[4869]: I0314 10:18:10.048067 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzr4\" (UniqueName: \"kubernetes.io/projected/d21ca767-0c53-4858-8293-1427b433aac3-kube-api-access-jmzr4\") pod \"crc-debug-dxtmr\" (UID: \"d21ca767-0c53-4858-8293-1427b433aac3\") " pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:10 crc kubenswrapper[4869]: I0314 10:18:10.048175 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d21ca767-0c53-4858-8293-1427b433aac3-host\") pod \"crc-debug-dxtmr\" (UID: \"d21ca767-0c53-4858-8293-1427b433aac3\") " pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:10 crc kubenswrapper[4869]: I0314 10:18:10.048408 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d21ca767-0c53-4858-8293-1427b433aac3-host\") pod \"crc-debug-dxtmr\" (UID: \"d21ca767-0c53-4858-8293-1427b433aac3\") " pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:10 crc kubenswrapper[4869]: I0314 10:18:10.081834 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzr4\" (UniqueName: \"kubernetes.io/projected/d21ca767-0c53-4858-8293-1427b433aac3-kube-api-access-jmzr4\") pod \"crc-debug-dxtmr\" (UID: \"d21ca767-0c53-4858-8293-1427b433aac3\") " pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:10 crc kubenswrapper[4869]: I0314 10:18:10.208392 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:10 crc kubenswrapper[4869]: W0314 10:18:10.249481 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd21ca767_0c53_4858_8293_1427b433aac3.slice/crio-74cf67e022f58a5e9892f213668e251c7dd306d70d908b05c760b96a86332f7e WatchSource:0}: Error finding container 74cf67e022f58a5e9892f213668e251c7dd306d70d908b05c760b96a86332f7e: Status 404 returned error can't find the container with id 74cf67e022f58a5e9892f213668e251c7dd306d70d908b05c760b96a86332f7e Mar 14 10:18:10 crc kubenswrapper[4869]: I0314 10:18:10.283782 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" event={"ID":"d21ca767-0c53-4858-8293-1427b433aac3","Type":"ContainerStarted","Data":"74cf67e022f58a5e9892f213668e251c7dd306d70d908b05c760b96a86332f7e"} Mar 14 10:18:18 crc kubenswrapper[4869]: I0314 10:18:18.704292 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:18:18 crc kubenswrapper[4869]: E0314 10:18:18.705229 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:18:22 crc kubenswrapper[4869]: I0314 10:18:22.416382 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" event={"ID":"d21ca767-0c53-4858-8293-1427b433aac3","Type":"ContainerStarted","Data":"aac200f64dd5079663d660cb446d48fd8f2e90a7139d8424bd520f00495c78c8"} Mar 14 10:18:22 crc kubenswrapper[4869]: I0314 10:18:22.429273 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" podStartSLOduration=1.6893168379999999 podStartE2EDuration="13.429257672s" podCreationTimestamp="2026-03-14 10:18:09 +0000 UTC" firstStartedPulling="2026-03-14 10:18:10.253105188 +0000 UTC m=+4843.225387241" lastFinishedPulling="2026-03-14 10:18:21.993046012 +0000 UTC m=+4854.965328075" observedRunningTime="2026-03-14 10:18:22.427491908 +0000 UTC m=+4855.399773961" watchObservedRunningTime="2026-03-14 10:18:22.429257672 +0000 UTC m=+4855.401539725" Mar 14 10:18:22 crc kubenswrapper[4869]: I0314 10:18:22.705101 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:18:22 crc kubenswrapper[4869]: E0314 10:18:22.705709 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:18:29 crc kubenswrapper[4869]: I0314 10:18:29.704955 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:18:29 crc kubenswrapper[4869]: E0314 10:18:29.705894 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:18:37 crc kubenswrapper[4869]: I0314 10:18:37.718203 4869 scope.go:117] "RemoveContainer" containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:18:38 crc kubenswrapper[4869]: I0314 10:18:38.547313 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="d21ca767-0c53-4858-8293-1427b433aac3" containerID="aac200f64dd5079663d660cb446d48fd8f2e90a7139d8424bd520f00495c78c8" exitCode=0 Mar 14 10:18:38 crc kubenswrapper[4869]: I0314 10:18:38.547410 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" event={"ID":"d21ca767-0c53-4858-8293-1427b433aac3","Type":"ContainerDied","Data":"aac200f64dd5079663d660cb446d48fd8f2e90a7139d8424bd520f00495c78c8"} Mar 14 10:18:38 crc kubenswrapper[4869]: I0314 10:18:38.550017 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7"} Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.605350 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.605894 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.651691 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.729890 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ck6w6/crc-debug-dxtmr"] Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.729939 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ck6w6/crc-debug-dxtmr"] Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.766124 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmzr4\" (UniqueName: \"kubernetes.io/projected/d21ca767-0c53-4858-8293-1427b433aac3-kube-api-access-jmzr4\") pod \"d21ca767-0c53-4858-8293-1427b433aac3\" (UID: \"d21ca767-0c53-4858-8293-1427b433aac3\") " Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.766266 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d21ca767-0c53-4858-8293-1427b433aac3-host\") pod \"d21ca767-0c53-4858-8293-1427b433aac3\" (UID: \"d21ca767-0c53-4858-8293-1427b433aac3\") " Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.766406 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d21ca767-0c53-4858-8293-1427b433aac3-host" (OuterVolumeSpecName: "host") pod "d21ca767-0c53-4858-8293-1427b433aac3" (UID: "d21ca767-0c53-4858-8293-1427b433aac3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.766900 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d21ca767-0c53-4858-8293-1427b433aac3-host\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.780660 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21ca767-0c53-4858-8293-1427b433aac3-kube-api-access-jmzr4" (OuterVolumeSpecName: "kube-api-access-jmzr4") pod "d21ca767-0c53-4858-8293-1427b433aac3" (UID: "d21ca767-0c53-4858-8293-1427b433aac3"). InnerVolumeSpecName "kube-api-access-jmzr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:18:39 crc kubenswrapper[4869]: I0314 10:18:39.868296 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmzr4\" (UniqueName: \"kubernetes.io/projected/d21ca767-0c53-4858-8293-1427b433aac3-kube-api-access-jmzr4\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:40 crc kubenswrapper[4869]: I0314 10:18:40.567382 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74cf67e022f58a5e9892f213668e251c7dd306d70d908b05c760b96a86332f7e" Mar 14 10:18:40 crc kubenswrapper[4869]: I0314 10:18:40.567493 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck6w6/crc-debug-dxtmr" Mar 14 10:18:40 crc kubenswrapper[4869]: I0314 10:18:40.899669 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ck6w6/crc-debug-h6jn6"] Mar 14 10:18:40 crc kubenswrapper[4869]: E0314 10:18:40.900860 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21ca767-0c53-4858-8293-1427b433aac3" containerName="container-00" Mar 14 10:18:40 crc kubenswrapper[4869]: I0314 10:18:40.900947 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21ca767-0c53-4858-8293-1427b433aac3" containerName="container-00" Mar 14 10:18:40 crc kubenswrapper[4869]: I0314 10:18:40.901230 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21ca767-0c53-4858-8293-1427b433aac3" containerName="container-00" Mar 14 10:18:40 crc kubenswrapper[4869]: I0314 10:18:40.901988 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:40 crc kubenswrapper[4869]: I0314 10:18:40.904041 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ck6w6"/"default-dockercfg-9dnnr" Mar 14 10:18:40 crc kubenswrapper[4869]: I0314 10:18:40.988904 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e5af3f6-65a3-4271-a161-f6549fdd81d9-host\") pod \"crc-debug-h6jn6\" (UID: \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\") " pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:40 crc kubenswrapper[4869]: I0314 10:18:40.988970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqhhx\" (UniqueName: \"kubernetes.io/projected/4e5af3f6-65a3-4271-a161-f6549fdd81d9-kube-api-access-rqhhx\") pod \"crc-debug-h6jn6\" (UID: \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\") " 
pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.091125 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e5af3f6-65a3-4271-a161-f6549fdd81d9-host\") pod \"crc-debug-h6jn6\" (UID: \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\") " pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.091579 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqhhx\" (UniqueName: \"kubernetes.io/projected/4e5af3f6-65a3-4271-a161-f6549fdd81d9-kube-api-access-rqhhx\") pod \"crc-debug-h6jn6\" (UID: \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\") " pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.091382 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e5af3f6-65a3-4271-a161-f6549fdd81d9-host\") pod \"crc-debug-h6jn6\" (UID: \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\") " pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.110333 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqhhx\" (UniqueName: \"kubernetes.io/projected/4e5af3f6-65a3-4271-a161-f6549fdd81d9-kube-api-access-rqhhx\") pod \"crc-debug-h6jn6\" (UID: \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\") " pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.225037 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:41 crc kubenswrapper[4869]: W0314 10:18:41.257113 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e5af3f6_65a3_4271_a161_f6549fdd81d9.slice/crio-c8bad49f9914261a5ac4440d8d52fc77b95ff000f0074bf8afc4584f1477c764 WatchSource:0}: Error finding container c8bad49f9914261a5ac4440d8d52fc77b95ff000f0074bf8afc4584f1477c764: Status 404 returned error can't find the container with id c8bad49f9914261a5ac4440d8d52fc77b95ff000f0074bf8afc4584f1477c764 Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.596183 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e5af3f6-65a3-4271-a161-f6549fdd81d9" containerID="beb52793d0536c9fbb2227a2a99bca1d11e898bad2f1ca0237ac870126bafc0b" exitCode=1 Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.596334 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" event={"ID":"4e5af3f6-65a3-4271-a161-f6549fdd81d9","Type":"ContainerDied","Data":"beb52793d0536c9fbb2227a2a99bca1d11e898bad2f1ca0237ac870126bafc0b"} Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.596470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" event={"ID":"4e5af3f6-65a3-4271-a161-f6549fdd81d9","Type":"ContainerStarted","Data":"c8bad49f9914261a5ac4440d8d52fc77b95ff000f0074bf8afc4584f1477c764"} Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.653821 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ck6w6/crc-debug-h6jn6"] Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.668093 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ck6w6/crc-debug-h6jn6"] Mar 14 10:18:41 crc kubenswrapper[4869]: I0314 10:18:41.714478 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d21ca767-0c53-4858-8293-1427b433aac3" path="/var/lib/kubelet/pods/d21ca767-0c53-4858-8293-1427b433aac3/volumes" Mar 14 10:18:42 crc kubenswrapper[4869]: I0314 10:18:42.694857 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:42 crc kubenswrapper[4869]: I0314 10:18:42.824792 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e5af3f6-65a3-4271-a161-f6549fdd81d9-host\") pod \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\" (UID: \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\") " Mar 14 10:18:42 crc kubenswrapper[4869]: I0314 10:18:42.824899 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5af3f6-65a3-4271-a161-f6549fdd81d9-host" (OuterVolumeSpecName: "host") pod "4e5af3f6-65a3-4271-a161-f6549fdd81d9" (UID: "4e5af3f6-65a3-4271-a161-f6549fdd81d9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 14 10:18:42 crc kubenswrapper[4869]: I0314 10:18:42.824952 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqhhx\" (UniqueName: \"kubernetes.io/projected/4e5af3f6-65a3-4271-a161-f6549fdd81d9-kube-api-access-rqhhx\") pod \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\" (UID: \"4e5af3f6-65a3-4271-a161-f6549fdd81d9\") " Mar 14 10:18:42 crc kubenswrapper[4869]: I0314 10:18:42.825675 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e5af3f6-65a3-4271-a161-f6549fdd81d9-host\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:42 crc kubenswrapper[4869]: I0314 10:18:42.837690 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e5af3f6-65a3-4271-a161-f6549fdd81d9-kube-api-access-rqhhx" (OuterVolumeSpecName: "kube-api-access-rqhhx") pod "4e5af3f6-65a3-4271-a161-f6549fdd81d9" (UID: "4e5af3f6-65a3-4271-a161-f6549fdd81d9"). InnerVolumeSpecName "kube-api-access-rqhhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:18:42 crc kubenswrapper[4869]: I0314 10:18:42.927607 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqhhx\" (UniqueName: \"kubernetes.io/projected/4e5af3f6-65a3-4271-a161-f6549fdd81d9-kube-api-access-rqhhx\") on node \"crc\" DevicePath \"\"" Mar 14 10:18:43 crc kubenswrapper[4869]: I0314 10:18:43.614537 4869 scope.go:117] "RemoveContainer" containerID="beb52793d0536c9fbb2227a2a99bca1d11e898bad2f1ca0237ac870126bafc0b" Mar 14 10:18:43 crc kubenswrapper[4869]: I0314 10:18:43.614908 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck6w6/crc-debug-h6jn6" Mar 14 10:18:43 crc kubenswrapper[4869]: I0314 10:18:43.713052 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e5af3f6-65a3-4271-a161-f6549fdd81d9" path="/var/lib/kubelet/pods/4e5af3f6-65a3-4271-a161-f6549fdd81d9/volumes" Mar 14 10:18:44 crc kubenswrapper[4869]: I0314 10:18:44.538587 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:18:44 crc kubenswrapper[4869]: I0314 10:18:44.538958 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:18:44 crc kubenswrapper[4869]: I0314 10:18:44.704186 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:18:44 crc kubenswrapper[4869]: E0314 10:18:44.704408 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:18:46 crc kubenswrapper[4869]: I0314 10:18:46.650973 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" exitCode=1 Mar 14 10:18:46 crc kubenswrapper[4869]: I0314 10:18:46.652653 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7"} Mar 14 10:18:46 crc kubenswrapper[4869]: I0314 10:18:46.652802 4869 scope.go:117] "RemoveContainer" 
containerID="b1ecc1953326db58ea3cd5237baf10dff415202eb21f6c57d1cf9ba9ae578a8a" Mar 14 10:18:46 crc kubenswrapper[4869]: I0314 10:18:46.654035 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:18:46 crc kubenswrapper[4869]: E0314 10:18:46.654534 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:18:54 crc kubenswrapper[4869]: I0314 10:18:54.538523 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:18:54 crc kubenswrapper[4869]: I0314 10:18:54.540306 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:18:54 crc kubenswrapper[4869]: I0314 10:18:54.541257 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:18:54 crc kubenswrapper[4869]: E0314 10:18:54.541489 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:18:54 crc kubenswrapper[4869]: I0314 10:18:54.747042 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:18:54 crc kubenswrapper[4869]: E0314 10:18:54.747320 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:18:55 crc kubenswrapper[4869]: I0314 10:18:55.704494 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:18:55 crc kubenswrapper[4869]: E0314 10:18:55.705077 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:19:06 crc kubenswrapper[4869]: I0314 10:19:06.704804 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:19:06 crc kubenswrapper[4869]: E0314 10:19:06.705946 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:19:08 crc kubenswrapper[4869]: I0314 10:19:08.991969 4869 scope.go:117] "RemoveContainer" containerID="29acf5fb14c6fd88a2d8d211c3e748819ab58c6ee95c4dca7e55119b5abeeaef" Mar 14 10:19:09 crc kubenswrapper[4869]: I0314 10:19:09.605911 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:19:09 crc kubenswrapper[4869]: I0314 10:19:09.606259 4869 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:19:09 crc kubenswrapper[4869]: I0314 10:19:09.606315 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 10:19:09 crc kubenswrapper[4869]: I0314 10:19:09.607356 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06387255575a0dc8979b135fc7d4a0acb46b9ddb64985eb1e9bd5653179d10ba"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 10:19:09 crc kubenswrapper[4869]: I0314 10:19:09.607457 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://06387255575a0dc8979b135fc7d4a0acb46b9ddb64985eb1e9bd5653179d10ba" gracePeriod=600 Mar 14 10:19:10 crc kubenswrapper[4869]: I0314 10:19:10.704091 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:19:10 crc kubenswrapper[4869]: E0314 10:19:10.704606 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:19:10 crc kubenswrapper[4869]: I0314 
10:19:10.925524 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="06387255575a0dc8979b135fc7d4a0acb46b9ddb64985eb1e9bd5653179d10ba" exitCode=0 Mar 14 10:19:10 crc kubenswrapper[4869]: I0314 10:19:10.925553 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"06387255575a0dc8979b135fc7d4a0acb46b9ddb64985eb1e9bd5653179d10ba"} Mar 14 10:19:10 crc kubenswrapper[4869]: I0314 10:19:10.925616 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerStarted","Data":"9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"} Mar 14 10:19:10 crc kubenswrapper[4869]: I0314 10:19:10.925641 4869 scope.go:117] "RemoveContainer" containerID="15fadbad376b7e775dd9733a1a436fa0de2ba0409f8741d6f54384e28a22b400" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.637000 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4hc"] Mar 14 10:19:14 crc kubenswrapper[4869]: E0314 10:19:14.637883 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5af3f6-65a3-4271-a161-f6549fdd81d9" containerName="container-00" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.637899 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5af3f6-65a3-4271-a161-f6549fdd81d9" containerName="container-00" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.638175 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5af3f6-65a3-4271-a161-f6549fdd81d9" containerName="container-00" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.640358 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.663189 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4hc"] Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.831817 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-catalog-content\") pod \"redhat-marketplace-gf4hc\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.832713 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-utilities\") pod \"redhat-marketplace-gf4hc\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.832975 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsgqv\" (UniqueName: \"kubernetes.io/projected/602022af-c69f-4650-aae6-cfc712df8f95-kube-api-access-nsgqv\") pod \"redhat-marketplace-gf4hc\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.934698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-utilities\") pod \"redhat-marketplace-gf4hc\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.934761 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nsgqv\" (UniqueName: \"kubernetes.io/projected/602022af-c69f-4650-aae6-cfc712df8f95-kube-api-access-nsgqv\") pod \"redhat-marketplace-gf4hc\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.934803 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-catalog-content\") pod \"redhat-marketplace-gf4hc\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.935450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-catalog-content\") pod \"redhat-marketplace-gf4hc\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.935735 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-utilities\") pod \"redhat-marketplace-gf4hc\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:14 crc kubenswrapper[4869]: I0314 10:19:14.967108 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsgqv\" (UniqueName: \"kubernetes.io/projected/602022af-c69f-4650-aae6-cfc712df8f95-kube-api-access-nsgqv\") pod \"redhat-marketplace-gf4hc\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:15 crc kubenswrapper[4869]: I0314 10:19:15.266023 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:15 crc kubenswrapper[4869]: I0314 10:19:15.807421 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4hc"] Mar 14 10:19:15 crc kubenswrapper[4869]: W0314 10:19:15.807899 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod602022af_c69f_4650_aae6_cfc712df8f95.slice/crio-f5d64d0a4b65e836a3b8f20f1ae4ea6a91978f418394ee114f3e7ede6c82ece5 WatchSource:0}: Error finding container f5d64d0a4b65e836a3b8f20f1ae4ea6a91978f418394ee114f3e7ede6c82ece5: Status 404 returned error can't find the container with id f5d64d0a4b65e836a3b8f20f1ae4ea6a91978f418394ee114f3e7ede6c82ece5 Mar 14 10:19:15 crc kubenswrapper[4869]: I0314 10:19:15.972197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4hc" event={"ID":"602022af-c69f-4650-aae6-cfc712df8f95","Type":"ContainerStarted","Data":"f5d64d0a4b65e836a3b8f20f1ae4ea6a91978f418394ee114f3e7ede6c82ece5"} Mar 14 10:19:16 crc kubenswrapper[4869]: I0314 10:19:16.981526 4869 generic.go:334] "Generic (PLEG): container finished" podID="602022af-c69f-4650-aae6-cfc712df8f95" containerID="b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae" exitCode=0 Mar 14 10:19:16 crc kubenswrapper[4869]: I0314 10:19:16.981612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4hc" event={"ID":"602022af-c69f-4650-aae6-cfc712df8f95","Type":"ContainerDied","Data":"b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae"} Mar 14 10:19:17 crc kubenswrapper[4869]: I0314 10:19:17.992819 4869 generic.go:334] "Generic (PLEG): container finished" podID="602022af-c69f-4650-aae6-cfc712df8f95" containerID="1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f" exitCode=0 Mar 14 10:19:17 crc kubenswrapper[4869]: I0314 
10:19:17.993077 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4hc" event={"ID":"602022af-c69f-4650-aae6-cfc712df8f95","Type":"ContainerDied","Data":"1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f"} Mar 14 10:19:18 crc kubenswrapper[4869]: I0314 10:19:18.705671 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:19:19 crc kubenswrapper[4869]: I0314 10:19:19.003082 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4hc" event={"ID":"602022af-c69f-4650-aae6-cfc712df8f95","Type":"ContainerStarted","Data":"8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8"} Mar 14 10:19:19 crc kubenswrapper[4869]: I0314 10:19:19.027631 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gf4hc" podStartSLOduration=3.647747301 podStartE2EDuration="5.027612856s" podCreationTimestamp="2026-03-14 10:19:14 +0000 UTC" firstStartedPulling="2026-03-14 10:19:16.983805751 +0000 UTC m=+4909.956087804" lastFinishedPulling="2026-03-14 10:19:18.363671306 +0000 UTC m=+4911.335953359" observedRunningTime="2026-03-14 10:19:19.020199815 +0000 UTC m=+4911.992481908" watchObservedRunningTime="2026-03-14 10:19:19.027612856 +0000 UTC m=+4911.999894919" Mar 14 10:19:20 crc kubenswrapper[4869]: I0314 10:19:20.013190 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da"} Mar 14 10:19:22 crc kubenswrapper[4869]: I0314 10:19:22.705099 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:19:22 crc kubenswrapper[4869]: E0314 10:19:22.707786 4869 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:19:24 crc kubenswrapper[4869]: I0314 10:19:24.404975 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:19:24 crc kubenswrapper[4869]: I0314 10:19:24.405316 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:19:25 crc kubenswrapper[4869]: I0314 10:19:25.266929 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:25 crc kubenswrapper[4869]: I0314 10:19:25.267228 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:25 crc kubenswrapper[4869]: I0314 10:19:25.315803 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:26 crc kubenswrapper[4869]: I0314 10:19:26.119695 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:26 crc kubenswrapper[4869]: I0314 10:19:26.171725 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4hc"] Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.089360 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" exitCode=1 Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.089436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da"} Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.089816 4869 scope.go:117] "RemoveContainer" containerID="3ed166fbe2efc4f3e2ce4e22e3d306e7fb77bffbabc93c8ebc8193a473304dea" Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.089984 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gf4hc" podUID="602022af-c69f-4650-aae6-cfc712df8f95" containerName="registry-server" containerID="cri-o://8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8" gracePeriod=2 Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.090823 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:19:28 crc kubenswrapper[4869]: E0314 10:19:28.091116 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.694037 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.813587 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-utilities\") pod \"602022af-c69f-4650-aae6-cfc712df8f95\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.813671 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-catalog-content\") pod \"602022af-c69f-4650-aae6-cfc712df8f95\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.813761 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsgqv\" (UniqueName: \"kubernetes.io/projected/602022af-c69f-4650-aae6-cfc712df8f95-kube-api-access-nsgqv\") pod \"602022af-c69f-4650-aae6-cfc712df8f95\" (UID: \"602022af-c69f-4650-aae6-cfc712df8f95\") " Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.814610 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-utilities" (OuterVolumeSpecName: "utilities") pod "602022af-c69f-4650-aae6-cfc712df8f95" (UID: "602022af-c69f-4650-aae6-cfc712df8f95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.825658 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602022af-c69f-4650-aae6-cfc712df8f95-kube-api-access-nsgqv" (OuterVolumeSpecName: "kube-api-access-nsgqv") pod "602022af-c69f-4650-aae6-cfc712df8f95" (UID: "602022af-c69f-4650-aae6-cfc712df8f95"). InnerVolumeSpecName "kube-api-access-nsgqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.844240 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "602022af-c69f-4650-aae6-cfc712df8f95" (UID: "602022af-c69f-4650-aae6-cfc712df8f95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.916676 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.916713 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602022af-c69f-4650-aae6-cfc712df8f95-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:19:28 crc kubenswrapper[4869]: I0314 10:19:28.916728 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsgqv\" (UniqueName: \"kubernetes.io/projected/602022af-c69f-4650-aae6-cfc712df8f95-kube-api-access-nsgqv\") on node \"crc\" DevicePath \"\"" Mar 14 10:19:29 crc kubenswrapper[4869]: I0314 10:19:29.107007 4869 generic.go:334] "Generic (PLEG): container finished" podID="602022af-c69f-4650-aae6-cfc712df8f95" containerID="8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8" exitCode=0 Mar 14 10:19:29 crc kubenswrapper[4869]: I0314 10:19:29.107053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4hc" event={"ID":"602022af-c69f-4650-aae6-cfc712df8f95","Type":"ContainerDied","Data":"8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8"} Mar 14 10:19:29 crc kubenswrapper[4869]: I0314 10:19:29.107089 4869 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-gf4hc" event={"ID":"602022af-c69f-4650-aae6-cfc712df8f95","Type":"ContainerDied","Data":"f5d64d0a4b65e836a3b8f20f1ae4ea6a91978f418394ee114f3e7ede6c82ece5"} Mar 14 10:19:29 crc kubenswrapper[4869]: I0314 10:19:29.107125 4869 scope.go:117] "RemoveContainer" containerID="8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8" Mar 14 10:19:29 crc kubenswrapper[4869]: I0314 10:19:29.107132 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf4hc" Mar 14 10:19:29 crc kubenswrapper[4869]: I0314 10:19:29.139303 4869 scope.go:117] "RemoveContainer" containerID="1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f" Mar 14 10:19:29 crc kubenswrapper[4869]: I0314 10:19:29.162298 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4hc"] Mar 14 10:19:29 crc kubenswrapper[4869]: I0314 10:19:29.172801 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4hc"] Mar 14 10:19:29 crc kubenswrapper[4869]: I0314 10:19:29.733174 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602022af-c69f-4650-aae6-cfc712df8f95" path="/var/lib/kubelet/pods/602022af-c69f-4650-aae6-cfc712df8f95/volumes" Mar 14 10:19:30 crc kubenswrapper[4869]: I0314 10:19:30.019054 4869 scope.go:117] "RemoveContainer" containerID="b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae" Mar 14 10:19:30 crc kubenswrapper[4869]: I0314 10:19:30.074578 4869 scope.go:117] "RemoveContainer" containerID="8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8" Mar 14 10:19:30 crc kubenswrapper[4869]: E0314 10:19:30.075353 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8\": container with ID 
starting with 8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8 not found: ID does not exist" containerID="8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8" Mar 14 10:19:30 crc kubenswrapper[4869]: I0314 10:19:30.075411 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8"} err="failed to get container status \"8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8\": rpc error: code = NotFound desc = could not find container \"8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8\": container with ID starting with 8b3d6361fad2403340a0a2d811860edf7b2cf426e590da424bf3dc1dc67f1da8 not found: ID does not exist" Mar 14 10:19:30 crc kubenswrapper[4869]: I0314 10:19:30.075442 4869 scope.go:117] "RemoveContainer" containerID="1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f" Mar 14 10:19:30 crc kubenswrapper[4869]: E0314 10:19:30.075977 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f\": container with ID starting with 1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f not found: ID does not exist" containerID="1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f" Mar 14 10:19:30 crc kubenswrapper[4869]: I0314 10:19:30.076024 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f"} err="failed to get container status \"1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f\": rpc error: code = NotFound desc = could not find container \"1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f\": container with ID starting with 1f39947ee03fc8aa3de6e8a43a5425c92361f70ddc5a232de6a2bf2009a6ca6f not found: 
ID does not exist" Mar 14 10:19:30 crc kubenswrapper[4869]: I0314 10:19:30.076051 4869 scope.go:117] "RemoveContainer" containerID="b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae" Mar 14 10:19:30 crc kubenswrapper[4869]: E0314 10:19:30.076477 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae\": container with ID starting with b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae not found: ID does not exist" containerID="b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae" Mar 14 10:19:30 crc kubenswrapper[4869]: I0314 10:19:30.076522 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae"} err="failed to get container status \"b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae\": rpc error: code = NotFound desc = could not find container \"b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae\": container with ID starting with b07b3ab70fa6050ee7ca432124166f7a1dd7f2df6bd8dc814e40736d7f13f7ae not found: ID does not exist" Mar 14 10:19:33 crc kubenswrapper[4869]: I0314 10:19:33.704857 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:19:33 crc kubenswrapper[4869]: E0314 10:19:33.705808 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:19:34 crc kubenswrapper[4869]: I0314 10:19:34.404476 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:19:34 crc kubenswrapper[4869]: I0314 10:19:34.405197 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:19:34 crc kubenswrapper[4869]: E0314 10:19:34.405551 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:19:34 crc kubenswrapper[4869]: I0314 10:19:34.406269 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:19:35 crc kubenswrapper[4869]: I0314 10:19:35.183426 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:19:35 crc kubenswrapper[4869]: E0314 10:19:35.184429 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:19:41 crc kubenswrapper[4869]: I0314 10:19:41.370313 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66db8b8f5d-6bxhh_982517b3-3240-45ca-9dcd-79f7a7a648a1/barbican-api/0.log" Mar 14 10:19:41 crc kubenswrapper[4869]: I0314 10:19:41.539864 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66db8b8f5d-6bxhh_982517b3-3240-45ca-9dcd-79f7a7a648a1/barbican-api-log/0.log" Mar 14 10:19:41 crc kubenswrapper[4869]: I0314 10:19:41.603256 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-768f98d44b-nmkh7_6d71cfc4-b9dc-4fe1-be63-7da133a49f08/barbican-keystone-listener/0.log" Mar 14 10:19:41 crc kubenswrapper[4869]: I0314 10:19:41.620136 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-768f98d44b-nmkh7_6d71cfc4-b9dc-4fe1-be63-7da133a49f08/barbican-keystone-listener-log/0.log" Mar 14 10:19:41 crc kubenswrapper[4869]: I0314 10:19:41.790091 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-659c5f77bf-p8tvx_c8af003e-d2bd-4748-b27c-5cdcb2e7914f/barbican-worker-log/0.log" Mar 14 10:19:41 crc kubenswrapper[4869]: I0314 10:19:41.823428 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-659c5f77bf-p8tvx_c8af003e-d2bd-4748-b27c-5cdcb2e7914f/barbican-worker/0.log" Mar 14 10:19:41 crc kubenswrapper[4869]: I0314 10:19:41.983731 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c7abd091-5889-4e4e-8f12-24f0bcba5262/ceilometer-notification-agent/0.log" Mar 14 10:19:42 crc kubenswrapper[4869]: I0314 10:19:42.016348 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c7abd091-5889-4e4e-8f12-24f0bcba5262/ceilometer-central-agent/0.log" Mar 14 10:19:42 crc kubenswrapper[4869]: I0314 10:19:42.051255 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c7abd091-5889-4e4e-8f12-24f0bcba5262/sg-core/0.log" Mar 14 10:19:42 crc kubenswrapper[4869]: I0314 10:19:42.060261 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c7abd091-5889-4e4e-8f12-24f0bcba5262/proxy-httpd/0.log" Mar 14 10:19:42 crc kubenswrapper[4869]: I0314 10:19:42.276449 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e821fb2e-1d49-4ae2-9404-1e6efa9009a5/cinder-api-log/0.log" Mar 14 10:19:42 crc kubenswrapper[4869]: I0314 10:19:42.376027 4869 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e821fb2e-1d49-4ae2-9404-1e6efa9009a5/cinder-api/0.log" Mar 14 10:19:42 crc kubenswrapper[4869]: I0314 10:19:42.464505 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3/probe/0.log" Mar 14 10:19:42 crc kubenswrapper[4869]: I0314 10:19:42.500758 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c8bc0b5d-d2e2-4be2-91f9-b60e43164ea3/cinder-scheduler/0.log" Mar 14 10:19:42 crc kubenswrapper[4869]: I0314 10:19:42.580547 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64674776dc-mx7wm_1d883534-96aa-48f1-97bb-01a43f7634f4/init/0.log" Mar 14 10:19:43 crc kubenswrapper[4869]: I0314 10:19:43.338825 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64674776dc-mx7wm_1d883534-96aa-48f1-97bb-01a43f7634f4/init/0.log" Mar 14 10:19:43 crc kubenswrapper[4869]: I0314 10:19:43.339293 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64674776dc-mx7wm_1d883534-96aa-48f1-97bb-01a43f7634f4/dnsmasq-dns/0.log" Mar 14 10:19:43 crc kubenswrapper[4869]: I0314 10:19:43.379626 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1ece40c5-10b0-4c1e-8985-99ccf56b5cfb/glance-httpd/0.log" Mar 14 10:19:43 crc kubenswrapper[4869]: I0314 10:19:43.534129 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_1ece40c5-10b0-4c1e-8985-99ccf56b5cfb/glance-log/0.log" Mar 14 10:19:43 crc kubenswrapper[4869]: I0314 10:19:43.616184 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5facda51-8081-455a-93ee-ca02ca6e6e55/glance-log/0.log" Mar 14 10:19:43 crc kubenswrapper[4869]: I0314 10:19:43.634770 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_5facda51-8081-455a-93ee-ca02ca6e6e55/glance-httpd/0.log" Mar 14 10:19:43 crc kubenswrapper[4869]: I0314 10:19:43.818029 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6b646449c6-8g8ql_c776b1be-07b2-4de0-808f-48c9a550aaa4/horizon-log/0.log" Mar 14 10:19:43 crc kubenswrapper[4869]: I0314 10:19:43.912101 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6b646449c6-8g8ql_c776b1be-07b2-4de0-808f-48c9a550aaa4/horizon/16.log" Mar 14 10:19:43 crc kubenswrapper[4869]: I0314 10:19:43.918720 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6b646449c6-8g8ql_c776b1be-07b2-4de0-808f-48c9a550aaa4/horizon/16.log" Mar 14 10:19:44 crc kubenswrapper[4869]: I0314 10:19:44.054923 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-9d48d6888-26pm7_90750956-6a92-4c2c-8213-07cd62712ba1/horizon/16.log" Mar 14 10:19:44 crc kubenswrapper[4869]: I0314 10:19:44.105113 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-9d48d6888-26pm7_90750956-6a92-4c2c-8213-07cd62712ba1/horizon-log/0.log" Mar 14 10:19:44 crc kubenswrapper[4869]: I0314 10:19:44.202978 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-9d48d6888-26pm7_90750956-6a92-4c2c-8213-07cd62712ba1/horizon/16.log" Mar 14 10:19:44 crc kubenswrapper[4869]: I0314 10:19:44.361883 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-857df8f9c4-4hrpr_ec510507-5c39-486f-839f-501fb07a1d07/keystone-api/0.log" Mar 14 10:19:44 crc kubenswrapper[4869]: I0314 10:19:44.416048 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29558041-bpmml_1c2a8743-c0f6-4e8b-b47f-157d2b478e00/keystone-cron/0.log" Mar 14 10:19:44 crc kubenswrapper[4869]: I0314 10:19:44.550176 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_b2ebe80d-8ef3-4dac-b796-1c0ced4ad905/kube-state-metrics/0.log" Mar 14 10:19:44 crc kubenswrapper[4869]: I0314 10:19:44.703792 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:19:44 crc kubenswrapper[4869]: E0314 10:19:44.704310 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:19:44 crc kubenswrapper[4869]: I0314 10:19:44.876130 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75cd657fd5-hrb28_cc3b5757-7791-4168-9d0b-0425525fc6b9/neutron-httpd/0.log" Mar 14 10:19:44 crc kubenswrapper[4869]: I0314 10:19:44.888893 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75cd657fd5-hrb28_cc3b5757-7791-4168-9d0b-0425525fc6b9/neutron-api/0.log" Mar 14 10:19:45 crc kubenswrapper[4869]: I0314 10:19:45.704312 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_da13efd4-046a-4059-9b04-b731f2d164b5/setup-container/0.log" Mar 14 10:19:45 crc kubenswrapper[4869]: I0314 10:19:45.888960 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_da13efd4-046a-4059-9b04-b731f2d164b5/setup-container/0.log" Mar 14 10:19:45 crc kubenswrapper[4869]: I0314 10:19:45.967376 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_da13efd4-046a-4059-9b04-b731f2d164b5/rabbitmq/0.log" Mar 14 10:19:46 crc kubenswrapper[4869]: I0314 10:19:46.243452 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_9c2bc163-f581-4326-90e9-2011f06c6c7f/nova-api-log/0.log" Mar 14 10:19:46 crc kubenswrapper[4869]: I0314 10:19:46.344459 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1b2bba6c-7e82-4c83-ba2c-c09eebec2ddc/nova-cell0-conductor-conductor/0.log" Mar 14 10:19:46 crc kubenswrapper[4869]: I0314 10:19:46.390241 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9c2bc163-f581-4326-90e9-2011f06c6c7f/nova-api-api/0.log" Mar 14 10:19:46 crc kubenswrapper[4869]: I0314 10:19:46.704203 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:19:46 crc kubenswrapper[4869]: E0314 10:19:46.704406 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:19:46 crc kubenswrapper[4869]: I0314 10:19:46.762290 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_da23b22b-973c-422a-8e5a-3ce03f11c458/nova-cell1-conductor-conductor/0.log" Mar 14 10:19:46 crc kubenswrapper[4869]: I0314 10:19:46.786302 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_cccb9f3d-777d-41b6-8a9e-60e91b9fe556/nova-cell1-novncproxy-novncproxy/0.log" Mar 14 10:19:46 crc kubenswrapper[4869]: I0314 10:19:46.886052 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_13396f06-e344-4bac-996f-aea1d8f3f547/nova-metadata-log/0.log" Mar 14 10:19:47 crc kubenswrapper[4869]: I0314 10:19:47.177731 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_80f6544c-c2ee-4b23-9de0-2b46a87aabe7/nova-scheduler-scheduler/0.log" Mar 14 10:19:47 crc kubenswrapper[4869]: I0314 10:19:47.254973 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b16088c-48ba-4c09-91b1-a0447bced81b/mysql-bootstrap/0.log" Mar 14 10:19:47 crc kubenswrapper[4869]: I0314 10:19:47.424313 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b16088c-48ba-4c09-91b1-a0447bced81b/mysql-bootstrap/0.log" Mar 14 10:19:47 crc kubenswrapper[4869]: I0314 10:19:47.444457 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2b16088c-48ba-4c09-91b1-a0447bced81b/galera/0.log" Mar 14 10:19:47 crc kubenswrapper[4869]: I0314 10:19:47.644147 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d42f4faa-b0db-40b7-acd5-c89f1eaf19ff/mysql-bootstrap/0.log" Mar 14 10:19:47 crc kubenswrapper[4869]: I0314 10:19:47.805969 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d42f4faa-b0db-40b7-acd5-c89f1eaf19ff/mysql-bootstrap/0.log" Mar 14 10:19:47 crc kubenswrapper[4869]: I0314 10:19:47.861610 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d42f4faa-b0db-40b7-acd5-c89f1eaf19ff/galera/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.017609 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_35c6d1fd-be8f-4390-9199-bf573760717b/openstackclient/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.137336 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-w9t8k_67f9eed2-67db-4563-8642-5da1a1198e3e/openstack-network-exporter/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.324093 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-rllnb_8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2/ovsdb-server-init/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.494241 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rllnb_8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2/ovsdb-server-init/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.510339 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_13396f06-e344-4bac-996f-aea1d8f3f547/nova-metadata-metadata/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.527370 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rllnb_8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2/ovs-vswitchd/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.557268 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rllnb_8f2e84cb-3fc4-4d32-87dc-a9e81a51cea2/ovsdb-server/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.769353 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vznj2_e8735cd0-7d17-4b28-b5fb-99219798ee6f/ovn-controller/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.822285 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_278ce4ec-200c-403d-b2a5-b69101f3e5aa/openstack-network-exporter/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.925380 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_278ce4ec-200c-403d-b2a5-b69101f3e5aa/ovn-northd/0.log" Mar 14 10:19:48 crc kubenswrapper[4869]: I0314 10:19:48.983435 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e841cbaa-b100-4321-9b08-f5725aee3408/openstack-network-exporter/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.121761 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_e841cbaa-b100-4321-9b08-f5725aee3408/ovsdbserver-nb/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.182281 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9b048a42-637e-49e6-bdfd-ba3d574e5e4b/openstack-network-exporter/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.303468 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9b048a42-637e-49e6-bdfd-ba3d574e5e4b/ovsdbserver-sb/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.399752 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6966c9cd66-p4jg9_e5be45b6-5241-4347-b552-b1dc75178894/placement-api/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.469488 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6966c9cd66-p4jg9_e5be45b6-5241-4347-b552-b1dc75178894/placement-log/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.550801 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_310e10f6-6126-4199-bc3f-e386680b8acb/init-config-reloader/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.798035 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_310e10f6-6126-4199-bc3f-e386680b8acb/prometheus/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.806569 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_310e10f6-6126-4199-bc3f-e386680b8acb/init-config-reloader/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.835696 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_310e10f6-6126-4199-bc3f-e386680b8acb/config-reloader/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.846544 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_310e10f6-6126-4199-bc3f-e386680b8acb/thanos-sidecar/0.log" Mar 14 10:19:49 crc kubenswrapper[4869]: I0314 10:19:49.990240 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9735b30c-8379-4478-9460-51882d519d32/setup-container/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.254206 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9735b30c-8379-4478-9460-51882d519d32/setup-container/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.265960 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9735b30c-8379-4478-9460-51882d519d32/rabbitmq/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.350229 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_38c3b4a0-0639-4d3b-ae4f-3e272522326f/setup-container/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.538393 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_38c3b4a0-0639-4d3b-ae4f-3e272522326f/setup-container/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.623185 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_38c3b4a0-0639-4d3b-ae4f-3e272522326f/rabbitmq/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.735626 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-74555fbb85-j9lkj_c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46/proxy-server/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.745232 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-74555fbb85-j9lkj_c52e9e90-9e7a-4b0e-b39e-b5f8c21b6e46/proxy-httpd/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.845605 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-ring-rebalance-ql8h6_1321f800-bd9a-41b6-9bfc-b4f48a644230/swift-ring-rebalance/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.937932 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/account-auditor/0.log" Mar 14 10:19:50 crc kubenswrapper[4869]: I0314 10:19:50.978842 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/account-reaper/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.084838 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/account-replicator/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.133126 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/account-server/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.164373 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/container-auditor/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.236360 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/container-replicator/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.310312 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/container-server/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.358905 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/container-updater/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.401213 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/object-auditor/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.460127 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/object-expirer/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.529573 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/object-replicator/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.598094 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/object-server/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.609190 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/object-updater/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.694028 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/rsync/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.745479 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8089ea8f-74c0-4fa4-93bd-dc107394a9e5/swift-recon-cron/0.log" Mar 14 10:19:51 crc kubenswrapper[4869]: I0314 10:19:51.964218 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_781f0f92-429c-4028-8617-3c5249f510bd/watcher-api-log/0.log" Mar 14 10:19:52 crc kubenswrapper[4869]: I0314 10:19:52.140041 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_28596820-2a8d-4347-afec-5e32a58a0398/watcher-applier/0.log" Mar 14 10:19:52 crc kubenswrapper[4869]: I0314 10:19:52.452658 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_0795b1cf-4f11-46ad-b29c-7af7c9016c01/watcher-decision-engine/0.log" Mar 14 10:19:54 crc kubenswrapper[4869]: I0314 10:19:54.383115 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_781f0f92-429c-4028-8617-3c5249f510bd/watcher-api/0.log" Mar 14 10:19:58 crc kubenswrapper[4869]: I0314 10:19:58.704076 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:19:58 crc kubenswrapper[4869]: E0314 10:19:58.704942 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.139422 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558060-hm855"] Mar 14 10:20:00 crc kubenswrapper[4869]: E0314 10:20:00.140785 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602022af-c69f-4650-aae6-cfc712df8f95" containerName="extract-utilities" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.140807 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="602022af-c69f-4650-aae6-cfc712df8f95" containerName="extract-utilities" Mar 14 10:20:00 crc kubenswrapper[4869]: E0314 10:20:00.140828 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602022af-c69f-4650-aae6-cfc712df8f95" containerName="registry-server" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.140835 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="602022af-c69f-4650-aae6-cfc712df8f95" containerName="registry-server" Mar 14 10:20:00 crc kubenswrapper[4869]: E0314 10:20:00.140870 4869 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="602022af-c69f-4650-aae6-cfc712df8f95" containerName="extract-content" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.140875 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="602022af-c69f-4650-aae6-cfc712df8f95" containerName="extract-content" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.141063 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="602022af-c69f-4650-aae6-cfc712df8f95" containerName="registry-server" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.141704 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558060-hm855" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.143270 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.144021 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.144224 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.152924 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558060-hm855"] Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.188627 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_4f89c32b-b055-4d5e-aa56-a5f41553707c/memcached/0.log" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.271975 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grfht\" (UniqueName: \"kubernetes.io/projected/1442f731-fd67-4328-a0e3-da23c33e97d7-kube-api-access-grfht\") pod \"auto-csr-approver-29558060-hm855\" (UID: \"1442f731-fd67-4328-a0e3-da23c33e97d7\") " 
pod="openshift-infra/auto-csr-approver-29558060-hm855" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.394011 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grfht\" (UniqueName: \"kubernetes.io/projected/1442f731-fd67-4328-a0e3-da23c33e97d7-kube-api-access-grfht\") pod \"auto-csr-approver-29558060-hm855\" (UID: \"1442f731-fd67-4328-a0e3-da23c33e97d7\") " pod="openshift-infra/auto-csr-approver-29558060-hm855" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.416224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grfht\" (UniqueName: \"kubernetes.io/projected/1442f731-fd67-4328-a0e3-da23c33e97d7-kube-api-access-grfht\") pod \"auto-csr-approver-29558060-hm855\" (UID: \"1442f731-fd67-4328-a0e3-da23c33e97d7\") " pod="openshift-infra/auto-csr-approver-29558060-hm855" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.504425 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558060-hm855" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.703624 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:20:00 crc kubenswrapper[4869]: E0314 10:20:00.704177 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:20:00 crc kubenswrapper[4869]: I0314 10:20:00.956015 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558060-hm855"] Mar 14 10:20:01 crc kubenswrapper[4869]: I0314 10:20:01.416361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29558060-hm855" event={"ID":"1442f731-fd67-4328-a0e3-da23c33e97d7","Type":"ContainerStarted","Data":"4595829bd0d0b2dfd0fcbb08175b77c087d6e26e91d2e6d9e31bfee67f867ce6"} Mar 14 10:20:03 crc kubenswrapper[4869]: I0314 10:20:03.437313 4869 generic.go:334] "Generic (PLEG): container finished" podID="1442f731-fd67-4328-a0e3-da23c33e97d7" containerID="a3dbc60fd5dfc05d7b857cd76795539e69cd81a7354daf05c8803727bef115f7" exitCode=0 Mar 14 10:20:03 crc kubenswrapper[4869]: I0314 10:20:03.437477 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558060-hm855" event={"ID":"1442f731-fd67-4328-a0e3-da23c33e97d7","Type":"ContainerDied","Data":"a3dbc60fd5dfc05d7b857cd76795539e69cd81a7354daf05c8803727bef115f7"} Mar 14 10:20:04 crc kubenswrapper[4869]: I0314 10:20:04.793810 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558060-hm855" Mar 14 10:20:04 crc kubenswrapper[4869]: I0314 10:20:04.881808 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grfht\" (UniqueName: \"kubernetes.io/projected/1442f731-fd67-4328-a0e3-da23c33e97d7-kube-api-access-grfht\") pod \"1442f731-fd67-4328-a0e3-da23c33e97d7\" (UID: \"1442f731-fd67-4328-a0e3-da23c33e97d7\") " Mar 14 10:20:04 crc kubenswrapper[4869]: I0314 10:20:04.887589 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1442f731-fd67-4328-a0e3-da23c33e97d7-kube-api-access-grfht" (OuterVolumeSpecName: "kube-api-access-grfht") pod "1442f731-fd67-4328-a0e3-da23c33e97d7" (UID: "1442f731-fd67-4328-a0e3-da23c33e97d7"). InnerVolumeSpecName "kube-api-access-grfht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:20:04 crc kubenswrapper[4869]: I0314 10:20:04.983747 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grfht\" (UniqueName: \"kubernetes.io/projected/1442f731-fd67-4328-a0e3-da23c33e97d7-kube-api-access-grfht\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:05 crc kubenswrapper[4869]: I0314 10:20:05.467584 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558060-hm855" event={"ID":"1442f731-fd67-4328-a0e3-da23c33e97d7","Type":"ContainerDied","Data":"4595829bd0d0b2dfd0fcbb08175b77c087d6e26e91d2e6d9e31bfee67f867ce6"} Mar 14 10:20:05 crc kubenswrapper[4869]: I0314 10:20:05.467825 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4595829bd0d0b2dfd0fcbb08175b77c087d6e26e91d2e6d9e31bfee67f867ce6" Mar 14 10:20:05 crc kubenswrapper[4869]: I0314 10:20:05.467654 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558060-hm855" Mar 14 10:20:05 crc kubenswrapper[4869]: I0314 10:20:05.862718 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558054-sbbk8"] Mar 14 10:20:05 crc kubenswrapper[4869]: I0314 10:20:05.874266 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558054-sbbk8"] Mar 14 10:20:07 crc kubenswrapper[4869]: I0314 10:20:07.719614 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb1c3c30-9ad4-43c9-bafa-e559384d56c9" path="/var/lib/kubelet/pods/eb1c3c30-9ad4-43c9-bafa-e559384d56c9/volumes" Mar 14 10:20:09 crc kubenswrapper[4869]: I0314 10:20:09.090856 4869 scope.go:117] "RemoveContainer" containerID="f376515417a4ac6dd70bd63ce832baad3e9033f85efd569119684eb773348b01" Mar 14 10:20:10 crc kubenswrapper[4869]: I0314 10:20:10.705288 4869 scope.go:117] "RemoveContainer" 
containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:20:10 crc kubenswrapper[4869]: E0314 10:20:10.705797 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:20:15 crc kubenswrapper[4869]: I0314 10:20:15.705603 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:20:15 crc kubenswrapper[4869]: E0314 10:20:15.707434 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:20:21 crc kubenswrapper[4869]: I0314 10:20:21.169771 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x_ad72d067-5a30-4464-8a54-bdc074e552ba/util/0.log" Mar 14 10:20:21 crc kubenswrapper[4869]: I0314 10:20:21.364004 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x_ad72d067-5a30-4464-8a54-bdc074e552ba/util/0.log" Mar 14 10:20:21 crc kubenswrapper[4869]: I0314 10:20:21.389853 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x_ad72d067-5a30-4464-8a54-bdc074e552ba/pull/0.log" Mar 14 10:20:21 crc kubenswrapper[4869]: I0314 10:20:21.432919 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x_ad72d067-5a30-4464-8a54-bdc074e552ba/pull/0.log" Mar 14 10:20:21 crc kubenswrapper[4869]: I0314 10:20:21.562828 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x_ad72d067-5a30-4464-8a54-bdc074e552ba/util/0.log" Mar 14 10:20:21 crc kubenswrapper[4869]: I0314 10:20:21.588202 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x_ad72d067-5a30-4464-8a54-bdc074e552ba/pull/0.log" Mar 14 10:20:21 crc kubenswrapper[4869]: I0314 10:20:21.636450 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_62d0bed58019ccc1d3626e895c0dbede7faaf3676e025b9f97c0e0a616fmn8x_ad72d067-5a30-4464-8a54-bdc074e552ba/extract/0.log" Mar 14 10:20:22 crc kubenswrapper[4869]: I0314 10:20:22.003321 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9c8c85cd7-5xpwd_68b90df0-f51f-4365-b2e0-96731de5afe3/manager/0.log" Mar 14 10:20:22 crc kubenswrapper[4869]: I0314 10:20:22.325574 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-74d565fbd5-c5g8t_f7e53cd1-216d-4b42-ad83-9d1098cc888b/manager/0.log" Mar 14 10:20:22 crc kubenswrapper[4869]: I0314 10:20:22.523714 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d6bd468b-nwggm_3259cee4-085a-4ba7-a3f3-117165a3b966/manager/0.log" Mar 14 10:20:22 crc kubenswrapper[4869]: I0314 10:20:22.758043 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9475cdd7-hb4t9_9ab0ae56-f1a8-473a-894f-00af6c8d174b/manager/0.log" Mar 14 10:20:23 crc kubenswrapper[4869]: I0314 
10:20:23.236627 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-bf6b7fd8c-q966w_3ea49362-1a35-4a1d-8bc4-1a34041ef967/manager/0.log" Mar 14 10:20:23 crc kubenswrapper[4869]: I0314 10:20:23.466451 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-fbfb5bd65-ncnch_a0c504b4-c098-4ce0-930e-289770c5113f/manager/0.log" Mar 14 10:20:23 crc kubenswrapper[4869]: I0314 10:20:23.744215 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-68f8d496f8-zj8hh_ccdbbe0b-04ef-4da0-b0c9-7a61570fd38c/manager/0.log" Mar 14 10:20:23 crc kubenswrapper[4869]: I0314 10:20:23.918024 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6f6f57b9b6-hd7c8_2f14a802-394d-4f62-a2aa-f5a2595c520e/manager/0.log" Mar 14 10:20:24 crc kubenswrapper[4869]: I0314 10:20:24.141482 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-744456f686-bz5rc_848518af-f0df-41f4-b0b6-e38b2e1df95b/manager/0.log" Mar 14 10:20:24 crc kubenswrapper[4869]: I0314 10:20:24.318979 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-cb6d66846-g9rf5_3f340508-914a-4a30-8ba8-2fdafac3f865/manager/0.log" Mar 14 10:20:24 crc kubenswrapper[4869]: I0314 10:20:24.436489 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-645c9f6488-p4vnd_779acd04-3c3b-4b59-8a41-b54250cfb2cb/manager/0.log" Mar 14 10:20:24 crc kubenswrapper[4869]: I0314 10:20:24.696737 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-58ff56fcc7-n9qfr_d521dfe5-1037-4df9-a34b-5996da959160/manager/0.log" Mar 14 10:20:24 crc kubenswrapper[4869]: 
I0314 10:20:24.704184 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:20:24 crc kubenswrapper[4869]: E0314 10:20:24.704361 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:20:24 crc kubenswrapper[4869]: I0314 10:20:24.709994 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7cf9f49d6-6pr99_09c23762-07cd-45d1-97ce-dc91ffebacfc/manager/0.log" Mar 14 10:20:24 crc kubenswrapper[4869]: I0314 10:20:24.874206 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-p85n5_4afcee0e-ed99-4df2-b68d-ba86e8dedacc/manager/0.log" Mar 14 10:20:25 crc kubenswrapper[4869]: I0314 10:20:25.209775 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6ccbf6d758-dckvn_580d9d1b-c740-4d28-b208-99a9ba7cd2ff/operator/0.log" Mar 14 10:20:25 crc kubenswrapper[4869]: I0314 10:20:25.498325 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-z7h7s_a9d728d8-cd35-45aa-8d07-9b868dc8b137/registry-server/0.log" Mar 14 10:20:25 crc kubenswrapper[4869]: I0314 10:20:25.802726 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-b5c469fd-2hff7_32265d81-a0fb-47e8-9cab-d88245cade72/manager/0.log" Mar 14 10:20:25 crc kubenswrapper[4869]: I0314 10:20:25.821440 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-848d74f969-xt747_6086aaa8-fd6f-4e48-bc77-1b5fad163e38/manager/0.log" Mar 14 10:20:26 crc kubenswrapper[4869]: I0314 10:20:26.398476 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zjmd5_e7ce5477-6d00-4d1b-a1c1-c244ac7e3c52/operator/0.log" Mar 14 10:20:26 crc kubenswrapper[4869]: I0314 10:20:26.504526 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-59b5586c67-f56l9_4a5b98d8-17c9-4d94-a61a-2c500a234d2e/manager/0.log" Mar 14 10:20:26 crc kubenswrapper[4869]: I0314 10:20:26.710366 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7f7469dbc6-msr64_12ffac0c-6749-4576-8bdf-f2eb432a6373/manager/0.log" Mar 14 10:20:26 crc kubenswrapper[4869]: I0314 10:20:26.822065 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-gplwr_3961ac22-8919-4b7a-8b44-64c1c5d9e1be/manager/0.log" Mar 14 10:20:26 crc kubenswrapper[4869]: I0314 10:20:26.889207 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6646df7cdb-7lbq5_451a50a4-ee48-4f61-9c05-514ce3897ffa/manager/0.log" Mar 14 10:20:27 crc kubenswrapper[4869]: I0314 10:20:27.070893 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cc8dbcb54-9rqrs_3b368982-02ed-44bb-bba7-9e707d2e4fbf/manager/0.log" Mar 14 10:20:30 crc kubenswrapper[4869]: I0314 10:20:30.703805 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:20:30 crc kubenswrapper[4869]: E0314 10:20:30.704581 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:20:32 crc kubenswrapper[4869]: I0314 10:20:32.538071 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64768694d-fjdmg_650e636f-cd1b-4f5b-814d-076980bd8141/manager/0.log" Mar 14 10:20:39 crc kubenswrapper[4869]: I0314 10:20:39.704951 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:20:39 crc kubenswrapper[4869]: E0314 10:20:39.707219 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.436076 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lt6r5"] Mar 14 10:20:44 crc kubenswrapper[4869]: E0314 10:20:44.437249 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1442f731-fd67-4328-a0e3-da23c33e97d7" containerName="oc" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.437350 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1442f731-fd67-4328-a0e3-da23c33e97d7" containerName="oc" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.437663 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1442f731-fd67-4328-a0e3-da23c33e97d7" containerName="oc" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.444348 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.480796 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lt6r5"] Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.527891 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78h87\" (UniqueName: \"kubernetes.io/projected/aa8c64ac-1664-4e4c-a122-0bfcf4988690-kube-api-access-78h87\") pod \"community-operators-lt6r5\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.528034 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-catalog-content\") pod \"community-operators-lt6r5\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.528535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-utilities\") pod \"community-operators-lt6r5\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.630156 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-catalog-content\") pod \"community-operators-lt6r5\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.630283 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-utilities\") pod \"community-operators-lt6r5\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.630312 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78h87\" (UniqueName: \"kubernetes.io/projected/aa8c64ac-1664-4e4c-a122-0bfcf4988690-kube-api-access-78h87\") pod \"community-operators-lt6r5\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.630799 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-catalog-content\") pod \"community-operators-lt6r5\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.630967 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-utilities\") pod \"community-operators-lt6r5\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.650342 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78h87\" (UniqueName: \"kubernetes.io/projected/aa8c64ac-1664-4e4c-a122-0bfcf4988690-kube-api-access-78h87\") pod \"community-operators-lt6r5\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:44 crc kubenswrapper[4869]: I0314 10:20:44.788518 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:45 crc kubenswrapper[4869]: I0314 10:20:45.307806 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lt6r5"] Mar 14 10:20:45 crc kubenswrapper[4869]: I0314 10:20:45.704633 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:20:45 crc kubenswrapper[4869]: E0314 10:20:45.705003 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:20:45 crc kubenswrapper[4869]: I0314 10:20:45.884192 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerID="05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec" exitCode=0 Mar 14 10:20:45 crc kubenswrapper[4869]: I0314 10:20:45.884295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lt6r5" event={"ID":"aa8c64ac-1664-4e4c-a122-0bfcf4988690","Type":"ContainerDied","Data":"05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec"} Mar 14 10:20:45 crc kubenswrapper[4869]: I0314 10:20:45.884576 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lt6r5" event={"ID":"aa8c64ac-1664-4e4c-a122-0bfcf4988690","Type":"ContainerStarted","Data":"03087cfbd8b29250c9688165982ed91975a47262580287652204ed8ffba04ddb"} Mar 14 10:20:46 crc kubenswrapper[4869]: I0314 10:20:46.899968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lt6r5" 
event={"ID":"aa8c64ac-1664-4e4c-a122-0bfcf4988690","Type":"ContainerStarted","Data":"4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430"} Mar 14 10:20:47 crc kubenswrapper[4869]: I0314 10:20:47.912167 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerID="4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430" exitCode=0 Mar 14 10:20:47 crc kubenswrapper[4869]: I0314 10:20:47.912453 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lt6r5" event={"ID":"aa8c64ac-1664-4e4c-a122-0bfcf4988690","Type":"ContainerDied","Data":"4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430"} Mar 14 10:20:47 crc kubenswrapper[4869]: I0314 10:20:47.912478 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lt6r5" event={"ID":"aa8c64ac-1664-4e4c-a122-0bfcf4988690","Type":"ContainerStarted","Data":"94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2"} Mar 14 10:20:47 crc kubenswrapper[4869]: I0314 10:20:47.935618 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lt6r5" podStartSLOduration=2.372897082 podStartE2EDuration="3.935602087s" podCreationTimestamp="2026-03-14 10:20:44 +0000 UTC" firstStartedPulling="2026-03-14 10:20:45.887083446 +0000 UTC m=+4998.859365509" lastFinishedPulling="2026-03-14 10:20:47.449788461 +0000 UTC m=+5000.422070514" observedRunningTime="2026-03-14 10:20:47.933045994 +0000 UTC m=+5000.905328057" watchObservedRunningTime="2026-03-14 10:20:47.935602087 +0000 UTC m=+5000.907884130" Mar 14 10:20:51 crc kubenswrapper[4869]: I0314 10:20:51.374219 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kz767_d38bd56c-2d8f-4c9f-a790-07b7bf8d8a09/control-plane-machine-set-operator/0.log" Mar 14 10:20:51 crc 
kubenswrapper[4869]: I0314 10:20:51.506398 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wx229_c1683ba1-6f04-40b1-b605-1ca997a00d59/kube-rbac-proxy/0.log" Mar 14 10:20:51 crc kubenswrapper[4869]: I0314 10:20:51.541068 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wx229_c1683ba1-6f04-40b1-b605-1ca997a00d59/machine-api-operator/0.log" Mar 14 10:20:51 crc kubenswrapper[4869]: I0314 10:20:51.704469 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:20:51 crc kubenswrapper[4869]: E0314 10:20:51.705094 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:20:54 crc kubenswrapper[4869]: I0314 10:20:54.789172 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:54 crc kubenswrapper[4869]: I0314 10:20:54.789527 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:54 crc kubenswrapper[4869]: I0314 10:20:54.863405 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:55 crc kubenswrapper[4869]: I0314 10:20:55.068541 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:55 crc kubenswrapper[4869]: I0314 10:20:55.129232 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-lt6r5"] Mar 14 10:20:56 crc kubenswrapper[4869]: I0314 10:20:56.996997 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lt6r5" podUID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerName="registry-server" containerID="cri-o://94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2" gracePeriod=2 Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.480970 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.646947 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78h87\" (UniqueName: \"kubernetes.io/projected/aa8c64ac-1664-4e4c-a122-0bfcf4988690-kube-api-access-78h87\") pod \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.646997 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-utilities\") pod \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.647254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-catalog-content\") pod \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\" (UID: \"aa8c64ac-1664-4e4c-a122-0bfcf4988690\") " Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.647789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-utilities" (OuterVolumeSpecName: "utilities") pod "aa8c64ac-1664-4e4c-a122-0bfcf4988690" (UID: 
"aa8c64ac-1664-4e4c-a122-0bfcf4988690"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.657762 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8c64ac-1664-4e4c-a122-0bfcf4988690-kube-api-access-78h87" (OuterVolumeSpecName: "kube-api-access-78h87") pod "aa8c64ac-1664-4e4c-a122-0bfcf4988690" (UID: "aa8c64ac-1664-4e4c-a122-0bfcf4988690"). InnerVolumeSpecName "kube-api-access-78h87". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.711686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa8c64ac-1664-4e4c-a122-0bfcf4988690" (UID: "aa8c64ac-1664-4e4c-a122-0bfcf4988690"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.749877 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.749903 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78h87\" (UniqueName: \"kubernetes.io/projected/aa8c64ac-1664-4e4c-a122-0bfcf4988690-kube-api-access-78h87\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:57 crc kubenswrapper[4869]: I0314 10:20:57.749914 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8c64ac-1664-4e4c-a122-0bfcf4988690-utilities\") on node \"crc\" DevicePath \"\"" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.008896 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerID="94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2" exitCode=0 Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.008946 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lt6r5" event={"ID":"aa8c64ac-1664-4e4c-a122-0bfcf4988690","Type":"ContainerDied","Data":"94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2"} Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.009007 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lt6r5" event={"ID":"aa8c64ac-1664-4e4c-a122-0bfcf4988690","Type":"ContainerDied","Data":"03087cfbd8b29250c9688165982ed91975a47262580287652204ed8ffba04ddb"} Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.009027 4869 scope.go:117] "RemoveContainer" containerID="94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.009023 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lt6r5" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.043615 4869 scope.go:117] "RemoveContainer" containerID="4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.050996 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lt6r5"] Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.060468 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lt6r5"] Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.074139 4869 scope.go:117] "RemoveContainer" containerID="05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.130898 4869 scope.go:117] "RemoveContainer" containerID="94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2" Mar 14 10:20:58 crc kubenswrapper[4869]: E0314 10:20:58.131951 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2\": container with ID starting with 94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2 not found: ID does not exist" containerID="94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.132010 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2"} err="failed to get container status \"94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2\": rpc error: code = NotFound desc = could not find container \"94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2\": container with ID starting with 94039c390e798fbe534815fc134305117ab4136fb5fd9b20aad0c7e2a95c3fe2 not 
found: ID does not exist" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.132044 4869 scope.go:117] "RemoveContainer" containerID="4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430" Mar 14 10:20:58 crc kubenswrapper[4869]: E0314 10:20:58.132418 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430\": container with ID starting with 4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430 not found: ID does not exist" containerID="4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.132469 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430"} err="failed to get container status \"4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430\": rpc error: code = NotFound desc = could not find container \"4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430\": container with ID starting with 4f567e17ffc65f02fd5f6059f91bd241ac5f1d51c5965743656562eb051ce430 not found: ID does not exist" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.132504 4869 scope.go:117] "RemoveContainer" containerID="05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec" Mar 14 10:20:58 crc kubenswrapper[4869]: E0314 10:20:58.132978 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec\": container with ID starting with 05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec not found: ID does not exist" containerID="05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec" Mar 14 10:20:58 crc kubenswrapper[4869]: I0314 10:20:58.133009 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec"} err="failed to get container status \"05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec\": rpc error: code = NotFound desc = could not find container \"05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec\": container with ID starting with 05a67b11ff9c1d88bb495d3e61ed43be37daf576c86cd6a46b1aab8459ba53ec not found: ID does not exist" Mar 14 10:20:59 crc kubenswrapper[4869]: I0314 10:20:59.704732 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:20:59 crc kubenswrapper[4869]: E0314 10:20:59.706303 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:20:59 crc kubenswrapper[4869]: I0314 10:20:59.721202 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" path="/var/lib/kubelet/pods/aa8c64ac-1664-4e4c-a122-0bfcf4988690/volumes" Mar 14 10:21:06 crc kubenswrapper[4869]: I0314 10:21:06.207234 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mqzbs_dc806080-ac5e-4802-9e6f-eca4be72ab49/cert-manager-controller/0.log" Mar 14 10:21:06 crc kubenswrapper[4869]: I0314 10:21:06.365252 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-hjngc_fabb3acb-23e6-49d7-a021-3c72273147a6/cert-manager-cainjector/0.log" Mar 14 10:21:06 crc kubenswrapper[4869]: I0314 10:21:06.411926 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4n6nc_b296f20d-7a2e-4515-9881-d00fe5f3c5ba/cert-manager-webhook/0.log" Mar 14 10:21:06 crc kubenswrapper[4869]: I0314 10:21:06.703954 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:21:06 crc kubenswrapper[4869]: E0314 10:21:06.704223 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:21:12 crc kubenswrapper[4869]: I0314 10:21:12.703783 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:21:12 crc kubenswrapper[4869]: E0314 10:21:12.705758 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:21:21 crc kubenswrapper[4869]: I0314 10:21:21.704394 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:21:21 crc kubenswrapper[4869]: E0314 10:21:21.705267 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:21:23 crc kubenswrapper[4869]: I0314 10:21:23.635845 4869 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-965cd_a7464d00-e0bb-4ff7-9d53-023ea540cf6b/nmstate-handler/0.log" Mar 14 10:21:23 crc kubenswrapper[4869]: I0314 10:21:23.640216 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-xrxbl_69005533-e9a5-4d50-912f-70adb7debd05/nmstate-console-plugin/0.log" Mar 14 10:21:23 crc kubenswrapper[4869]: I0314 10:21:23.816287 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-pjm8s_7def1104-dc9a-43ed-9c74-744352ed80cb/kube-rbac-proxy/0.log" Mar 14 10:21:23 crc kubenswrapper[4869]: I0314 10:21:23.816788 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-pjm8s_7def1104-dc9a-43ed-9c74-744352ed80cb/nmstate-metrics/0.log" Mar 14 10:21:24 crc kubenswrapper[4869]: I0314 10:21:24.059445 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-244rz_d6ab5eba-10b6-4553-a185-c9fee70073c0/nmstate-webhook/0.log" Mar 14 10:21:24 crc kubenswrapper[4869]: I0314 10:21:24.067985 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-n59gq_a4c258ce-f170-4d41-81c7-8baff94d2db9/nmstate-operator/0.log" Mar 14 10:21:26 crc kubenswrapper[4869]: I0314 10:21:26.712946 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:21:26 crc kubenswrapper[4869]: E0314 10:21:26.716777 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:21:35 crc kubenswrapper[4869]: I0314 10:21:35.707278 4869 
scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:21:35 crc kubenswrapper[4869]: E0314 10:21:35.708273 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:21:39 crc kubenswrapper[4869]: I0314 10:21:39.605224 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:21:39 crc kubenswrapper[4869]: I0314 10:21:39.605770 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:21:40 crc kubenswrapper[4869]: I0314 10:21:40.703650 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:21:40 crc kubenswrapper[4869]: E0314 10:21:40.704025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:21:41 crc kubenswrapper[4869]: I0314 10:21:41.596785 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zc9kg_3dca19a5-2b14-442a-b257-8fdd673d7a23/prometheus-operator/0.log" Mar 14 10:21:41 crc kubenswrapper[4869]: I0314 10:21:41.813700 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-95768cd78-pfxf9_a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda/prometheus-operator-admission-webhook/0.log" Mar 14 10:21:41 crc kubenswrapper[4869]: I0314 10:21:41.814220 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-95768cd78-9xsl8_fd7586a9-5944-496e-95a7-c62cacd45de7/prometheus-operator-admission-webhook/0.log" Mar 14 10:21:41 crc kubenswrapper[4869]: I0314 10:21:41.964975 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-w2q6b_31fe446c-71c8-4715-988d-513ec60bb444/operator/0.log" Mar 14 10:21:42 crc kubenswrapper[4869]: I0314 10:21:42.014614 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-wjfwj_bae7e494-3d8d-4c79-be70-40c1013b81c2/perses-operator/0.log" Mar 14 10:21:49 crc kubenswrapper[4869]: I0314 10:21:49.708410 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:21:49 crc kubenswrapper[4869]: E0314 10:21:49.709276 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:21:51 crc kubenswrapper[4869]: I0314 10:21:51.703990 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:21:51 crc 
kubenswrapper[4869]: E0314 10:21:51.704445 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.102459 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-kctvk_71df09af-93c6-48ff-b88b-cb91b0649482/kube-rbac-proxy/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.196435 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-kctvk_71df09af-93c6-48ff-b88b-cb91b0649482/controller/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.301310 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-frr-files/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.494667 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-frr-files/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.511878 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-metrics/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.527015 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-reloader/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.723770 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-reloader/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.860526 4869 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-frr-files/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.866603 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-reloader/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.898892 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-metrics/0.log" Mar 14 10:21:59 crc kubenswrapper[4869]: I0314 10:21:59.923331 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-metrics/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.079873 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-frr-files/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.137817 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558062-lmm8z"] Mar 14 10:22:00 crc kubenswrapper[4869]: E0314 10:22:00.138166 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerName="extract-utilities" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.138178 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerName="extract-utilities" Mar 14 10:22:00 crc kubenswrapper[4869]: E0314 10:22:00.138217 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerName="extract-content" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.138223 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerName="extract-content" Mar 14 10:22:00 crc kubenswrapper[4869]: E0314 
10:22:00.138240 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerName="registry-server" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.138246 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerName="registry-server" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.138422 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8c64ac-1664-4e4c-a122-0bfcf4988690" containerName="registry-server" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.139044 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558062-lmm8z" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.140480 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/controller/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.141059 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.141575 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.155112 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.157295 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-reloader/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.157389 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558062-lmm8z"] Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.169284 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/cp-metrics/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.280734 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7f2f\" (UniqueName: \"kubernetes.io/projected/27db24a4-d7ff-4b41-9358-805605459feb-kube-api-access-j7f2f\") pod \"auto-csr-approver-29558062-lmm8z\" (UID: \"27db24a4-d7ff-4b41-9358-805605459feb\") " pod="openshift-infra/auto-csr-approver-29558062-lmm8z" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.378464 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/frr-metrics/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.382300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7f2f\" (UniqueName: \"kubernetes.io/projected/27db24a4-d7ff-4b41-9358-805605459feb-kube-api-access-j7f2f\") pod \"auto-csr-approver-29558062-lmm8z\" (UID: \"27db24a4-d7ff-4b41-9358-805605459feb\") " pod="openshift-infra/auto-csr-approver-29558062-lmm8z" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.383945 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/kube-rbac-proxy/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.407086 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7f2f\" (UniqueName: \"kubernetes.io/projected/27db24a4-d7ff-4b41-9358-805605459feb-kube-api-access-j7f2f\") pod \"auto-csr-approver-29558062-lmm8z\" (UID: \"27db24a4-d7ff-4b41-9358-805605459feb\") " pod="openshift-infra/auto-csr-approver-29558062-lmm8z" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.455812 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/kube-rbac-proxy-frr/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.458355 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558062-lmm8z" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.606741 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/reloader/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.796578 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-b6tck_c1bf896c-b7f5-4ee8-a8f3-531729f11481/frr-k8s-webhook-server/0.log" Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.922224 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558062-lmm8z"] Mar 14 10:22:00 crc kubenswrapper[4869]: I0314 10:22:00.935055 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c96b4b56d-sg8km_9c209b4c-7e2f-4f26-a356-e5d8f1fee0f0/manager/0.log" Mar 14 10:22:01 crc kubenswrapper[4869]: I0314 10:22:01.071701 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-74b96dc575-jpcc5_1585557f-13cc-49e6-8360-ab13426bbeb8/webhook-server/0.log" Mar 14 10:22:01 crc kubenswrapper[4869]: I0314 10:22:01.285799 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-px22f_194fa38d-a339-4883-bc71-3601aa7441b3/kube-rbac-proxy/0.log" Mar 14 10:22:01 crc kubenswrapper[4869]: I0314 10:22:01.624893 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558062-lmm8z" event={"ID":"27db24a4-d7ff-4b41-9358-805605459feb","Type":"ContainerStarted","Data":"a3ce1a94adce346d52370c443d463d5f6420e5d591efc7229063f73800c2c4d8"} Mar 14 10:22:01 crc 
kubenswrapper[4869]: I0314 10:22:01.757542 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-px22f_194fa38d-a339-4883-bc71-3601aa7441b3/speaker/0.log" Mar 14 10:22:02 crc kubenswrapper[4869]: I0314 10:22:02.029110 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4sspm_b55cd623-5f55-4111-a671-e409e6c02697/frr/0.log" Mar 14 10:22:02 crc kubenswrapper[4869]: I0314 10:22:02.634100 4869 generic.go:334] "Generic (PLEG): container finished" podID="27db24a4-d7ff-4b41-9358-805605459feb" containerID="0dd3d22c03848f0495fa96371d3e1df228116cc5cc608b7f3e1d9b0686e9ddad" exitCode=0 Mar 14 10:22:02 crc kubenswrapper[4869]: I0314 10:22:02.634152 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558062-lmm8z" event={"ID":"27db24a4-d7ff-4b41-9358-805605459feb","Type":"ContainerDied","Data":"0dd3d22c03848f0495fa96371d3e1df228116cc5cc608b7f3e1d9b0686e9ddad"} Mar 14 10:22:03 crc kubenswrapper[4869]: I0314 10:22:03.704118 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:22:03 crc kubenswrapper[4869]: E0314 10:22:03.704455 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:22:04 crc kubenswrapper[4869]: I0314 10:22:04.033452 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558062-lmm8z" Mar 14 10:22:04 crc kubenswrapper[4869]: I0314 10:22:04.156203 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7f2f\" (UniqueName: \"kubernetes.io/projected/27db24a4-d7ff-4b41-9358-805605459feb-kube-api-access-j7f2f\") pod \"27db24a4-d7ff-4b41-9358-805605459feb\" (UID: \"27db24a4-d7ff-4b41-9358-805605459feb\") " Mar 14 10:22:04 crc kubenswrapper[4869]: I0314 10:22:04.165817 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27db24a4-d7ff-4b41-9358-805605459feb-kube-api-access-j7f2f" (OuterVolumeSpecName: "kube-api-access-j7f2f") pod "27db24a4-d7ff-4b41-9358-805605459feb" (UID: "27db24a4-d7ff-4b41-9358-805605459feb"). InnerVolumeSpecName "kube-api-access-j7f2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:22:04 crc kubenswrapper[4869]: I0314 10:22:04.258460 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7f2f\" (UniqueName: \"kubernetes.io/projected/27db24a4-d7ff-4b41-9358-805605459feb-kube-api-access-j7f2f\") on node \"crc\" DevicePath \"\"" Mar 14 10:22:04 crc kubenswrapper[4869]: I0314 10:22:04.663583 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558062-lmm8z" event={"ID":"27db24a4-d7ff-4b41-9358-805605459feb","Type":"ContainerDied","Data":"a3ce1a94adce346d52370c443d463d5f6420e5d591efc7229063f73800c2c4d8"} Mar 14 10:22:04 crc kubenswrapper[4869]: I0314 10:22:04.663622 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ce1a94adce346d52370c443d463d5f6420e5d591efc7229063f73800c2c4d8" Mar 14 10:22:04 crc kubenswrapper[4869]: I0314 10:22:04.663634 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558062-lmm8z" Mar 14 10:22:05 crc kubenswrapper[4869]: I0314 10:22:05.126215 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558056-pwpsf"] Mar 14 10:22:05 crc kubenswrapper[4869]: I0314 10:22:05.138209 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558056-pwpsf"] Mar 14 10:22:05 crc kubenswrapper[4869]: I0314 10:22:05.704653 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:22:05 crc kubenswrapper[4869]: E0314 10:22:05.705240 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:22:05 crc kubenswrapper[4869]: I0314 10:22:05.722734 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="528d6b75-4067-4f1f-8585-6fe161aea0b4" path="/var/lib/kubelet/pods/528d6b75-4067-4f1f-8585-6fe161aea0b4/volumes" Mar 14 10:22:09 crc kubenswrapper[4869]: I0314 10:22:09.218340 4869 scope.go:117] "RemoveContainer" containerID="cb1c90a96870002a60661180175eaf88231319cb9cd659a373076ce4c93e00fa" Mar 14 10:22:09 crc kubenswrapper[4869]: I0314 10:22:09.605462 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:22:09 crc kubenswrapper[4869]: I0314 10:22:09.605539 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" 
podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:22:17 crc kubenswrapper[4869]: I0314 10:22:17.713570 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:22:17 crc kubenswrapper[4869]: E0314 10:22:17.714434 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:22:18 crc kubenswrapper[4869]: I0314 10:22:18.171045 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7_3cf8965f-4dc4-402b-91ab-415c90cde24e/util/0.log" Mar 14 10:22:18 crc kubenswrapper[4869]: I0314 10:22:18.378661 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7_3cf8965f-4dc4-402b-91ab-415c90cde24e/util/0.log" Mar 14 10:22:18 crc kubenswrapper[4869]: I0314 10:22:18.418954 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7_3cf8965f-4dc4-402b-91ab-415c90cde24e/pull/0.log" Mar 14 10:22:18 crc kubenswrapper[4869]: I0314 10:22:18.423688 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7_3cf8965f-4dc4-402b-91ab-415c90cde24e/pull/0.log" Mar 14 10:22:18 crc kubenswrapper[4869]: I0314 10:22:18.635543 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7_3cf8965f-4dc4-402b-91ab-415c90cde24e/pull/0.log" Mar 14 10:22:18 crc kubenswrapper[4869]: I0314 10:22:18.705811 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7_3cf8965f-4dc4-402b-91ab-415c90cde24e/extract/0.log" Mar 14 10:22:18 crc kubenswrapper[4869]: I0314 10:22:18.707946 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8742jqt7_3cf8965f-4dc4-402b-91ab-415c90cde24e/util/0.log" Mar 14 10:22:18 crc kubenswrapper[4869]: I0314 10:22:18.860545 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts_3ddf1a82-4f87-475c-895b-23cfe6ed443c/util/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.026087 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts_3ddf1a82-4f87-475c-895b-23cfe6ed443c/util/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.043484 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts_3ddf1a82-4f87-475c-895b-23cfe6ed443c/pull/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.072428 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts_3ddf1a82-4f87-475c-895b-23cfe6ed443c/pull/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.161357 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts_3ddf1a82-4f87-475c-895b-23cfe6ed443c/util/0.log" Mar 14 
10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.202276 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts_3ddf1a82-4f87-475c-895b-23cfe6ed443c/pull/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.216929 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c18tjts_3ddf1a82-4f87-475c-895b-23cfe6ed443c/extract/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.327180 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_698f4362-610d-4426-a6da-e569295eedfd/util/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.574181 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_698f4362-610d-4426-a6da-e569295eedfd/util/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.584230 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_698f4362-610d-4426-a6da-e569295eedfd/pull/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.613042 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_698f4362-610d-4426-a6da-e569295eedfd/pull/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.705357 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:22:19 crc kubenswrapper[4869]: E0314 10:22:19.705599 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon 
pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.777028 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_698f4362-610d-4426-a6da-e569295eedfd/util/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.847604 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_698f4362-610d-4426-a6da-e569295eedfd/extract/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.861678 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kpnbs_698f4362-610d-4426-a6da-e569295eedfd/pull/0.log" Mar 14 10:22:19 crc kubenswrapper[4869]: I0314 10:22:19.986173 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d9kq_1fb58f92-8606-4713-b0ea-ff91ddcca450/extract-utilities/0.log" Mar 14 10:22:20 crc kubenswrapper[4869]: I0314 10:22:20.151565 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d9kq_1fb58f92-8606-4713-b0ea-ff91ddcca450/extract-content/0.log" Mar 14 10:22:20 crc kubenswrapper[4869]: I0314 10:22:20.179195 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d9kq_1fb58f92-8606-4713-b0ea-ff91ddcca450/extract-utilities/0.log" Mar 14 10:22:20 crc kubenswrapper[4869]: I0314 10:22:20.214223 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d9kq_1fb58f92-8606-4713-b0ea-ff91ddcca450/extract-content/0.log" Mar 14 10:22:20 crc kubenswrapper[4869]: I0314 10:22:20.338284 4869 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d9kq_1fb58f92-8606-4713-b0ea-ff91ddcca450/extract-utilities/0.log" Mar 14 10:22:20 crc kubenswrapper[4869]: I0314 10:22:20.375255 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d9kq_1fb58f92-8606-4713-b0ea-ff91ddcca450/extract-content/0.log" Mar 14 10:22:20 crc kubenswrapper[4869]: I0314 10:22:20.845801 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp4fw_690e1277-d006-4116-a019-5a0c9d2aef19/extract-utilities/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.016202 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp4fw_690e1277-d006-4116-a019-5a0c9d2aef19/extract-utilities/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.027050 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4d9kq_1fb58f92-8606-4713-b0ea-ff91ddcca450/registry-server/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.045444 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp4fw_690e1277-d006-4116-a019-5a0c9d2aef19/extract-content/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.078226 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp4fw_690e1277-d006-4116-a019-5a0c9d2aef19/extract-content/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.225073 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp4fw_690e1277-d006-4116-a019-5a0c9d2aef19/extract-utilities/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.290183 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pp4fw_690e1277-d006-4116-a019-5a0c9d2aef19/extract-content/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.483116 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-9twg2_0d2388b0-415d-43ea-9d85-a417297abc29/marketplace-operator/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.575206 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nnnql_5b63d540-c356-43fe-bf6a-c1f8aad19156/extract-utilities/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.823658 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nnnql_5b63d540-c356-43fe-bf6a-c1f8aad19156/extract-utilities/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.845719 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nnnql_5b63d540-c356-43fe-bf6a-c1f8aad19156/extract-content/0.log" Mar 14 10:22:21 crc kubenswrapper[4869]: I0314 10:22:21.846919 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nnnql_5b63d540-c356-43fe-bf6a-c1f8aad19156/extract-content/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.019052 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp4fw_690e1277-d006-4116-a019-5a0c9d2aef19/registry-server/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.116245 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nnnql_5b63d540-c356-43fe-bf6a-c1f8aad19156/extract-content/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.171964 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-nnnql_5b63d540-c356-43fe-bf6a-c1f8aad19156/extract-utilities/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.269194 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qc2w7_f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9/extract-utilities/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.295422 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nnnql_5b63d540-c356-43fe-bf6a-c1f8aad19156/registry-server/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.401220 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qc2w7_f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9/extract-utilities/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.426586 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qc2w7_f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9/extract-content/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.434271 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qc2w7_f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9/extract-content/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.550135 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qc2w7_f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9/extract-content/0.log" Mar 14 10:22:22 crc kubenswrapper[4869]: I0314 10:22:22.569075 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qc2w7_f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9/extract-utilities/0.log" Mar 14 10:22:23 crc kubenswrapper[4869]: I0314 10:22:23.093317 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qc2w7_f2e2177a-92e8-4d4d-bd3c-429dbfcc2db9/registry-server/0.log" Mar 14 
10:22:30 crc kubenswrapper[4869]: I0314 10:22:30.703691 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:22:30 crc kubenswrapper[4869]: E0314 10:22:30.704522 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:22:33 crc kubenswrapper[4869]: I0314 10:22:33.704103 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:22:33 crc kubenswrapper[4869]: E0314 10:22:33.705135 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:22:37 crc kubenswrapper[4869]: I0314 10:22:37.043670 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-95768cd78-9xsl8_fd7586a9-5944-496e-95a7-c62cacd45de7/prometheus-operator-admission-webhook/0.log" Mar 14 10:22:37 crc kubenswrapper[4869]: I0314 10:22:37.085984 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zc9kg_3dca19a5-2b14-442a-b257-8fdd673d7a23/prometheus-operator/0.log" Mar 14 10:22:37 crc kubenswrapper[4869]: I0314 10:22:37.092594 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-95768cd78-pfxf9_a2ed2de8-a46d-4d93-b2b9-497d4d3a8dda/prometheus-operator-admission-webhook/0.log" 
Mar 14 10:22:37 crc kubenswrapper[4869]: I0314 10:22:37.267716 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-wjfwj_bae7e494-3d8d-4c79-be70-40c1013b81c2/perses-operator/0.log" Mar 14 10:22:37 crc kubenswrapper[4869]: I0314 10:22:37.309366 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-w2q6b_31fe446c-71c8-4715-988d-513ec60bb444/operator/0.log" Mar 14 10:22:39 crc kubenswrapper[4869]: I0314 10:22:39.605762 4869 patch_prober.go:28] interesting pod/machine-config-daemon-jj985 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 14 10:22:39 crc kubenswrapper[4869]: I0314 10:22:39.606228 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 14 10:22:39 crc kubenswrapper[4869]: I0314 10:22:39.606294 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jj985" Mar 14 10:22:39 crc kubenswrapper[4869]: I0314 10:22:39.607611 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"} pod="openshift-machine-config-operator/machine-config-daemon-jj985" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 14 10:22:39 crc kubenswrapper[4869]: I0314 10:22:39.607801 4869 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerName="machine-config-daemon" containerID="cri-o://9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" gracePeriod=600 Mar 14 10:22:40 crc kubenswrapper[4869]: E0314 10:22:40.718435 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:22:41 crc kubenswrapper[4869]: I0314 10:22:41.048388 4869 generic.go:334] "Generic (PLEG): container finished" podID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" exitCode=0 Mar 14 10:22:41 crc kubenswrapper[4869]: I0314 10:22:41.048434 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jj985" event={"ID":"e08d1ace-1d27-4a7d-b08e-c245a103c56f","Type":"ContainerDied","Data":"9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"} Mar 14 10:22:41 crc kubenswrapper[4869]: I0314 10:22:41.048469 4869 scope.go:117] "RemoveContainer" containerID="06387255575a0dc8979b135fc7d4a0acb46b9ddb64985eb1e9bd5653179d10ba" Mar 14 10:22:41 crc kubenswrapper[4869]: I0314 10:22:41.049224 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:22:41 crc kubenswrapper[4869]: E0314 10:22:41.049622 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:22:42 crc kubenswrapper[4869]: I0314 10:22:42.703196 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:22:42 crc kubenswrapper[4869]: E0314 10:22:42.703601 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:22:45 crc kubenswrapper[4869]: I0314 10:22:45.703845 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:22:45 crc kubenswrapper[4869]: E0314 10:22:45.704360 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:22:53 crc kubenswrapper[4869]: I0314 10:22:53.706525 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:22:53 crc kubenswrapper[4869]: E0314 10:22:53.707117 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:22:56 crc kubenswrapper[4869]: I0314 10:22:56.707534 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:22:56 crc kubenswrapper[4869]: E0314 10:22:56.708437 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:23:00 crc kubenswrapper[4869]: I0314 10:23:00.704350 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:23:00 crc kubenswrapper[4869]: E0314 10:23:00.705218 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:23:04 crc kubenswrapper[4869]: I0314 10:23:04.703894 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:23:04 crc kubenswrapper[4869]: E0314 10:23:04.705155 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:23:07 crc 
kubenswrapper[4869]: I0314 10:23:07.712605 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:23:07 crc kubenswrapper[4869]: E0314 10:23:07.713293 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:23:14 crc kubenswrapper[4869]: I0314 10:23:14.705063 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:23:14 crc kubenswrapper[4869]: E0314 10:23:14.706005 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:23:17 crc kubenswrapper[4869]: I0314 10:23:17.716889 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:23:17 crc kubenswrapper[4869]: E0314 10:23:17.718033 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:23:21 crc kubenswrapper[4869]: I0314 10:23:21.707711 4869 scope.go:117] "RemoveContainer" 
containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:23:21 crc kubenswrapper[4869]: E0314 10:23:21.708897 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:23:28 crc kubenswrapper[4869]: I0314 10:23:28.705388 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:23:28 crc kubenswrapper[4869]: E0314 10:23:28.706131 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:23:32 crc kubenswrapper[4869]: I0314 10:23:32.704310 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:23:32 crc kubenswrapper[4869]: E0314 10:23:32.705764 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:23:33 crc kubenswrapper[4869]: I0314 10:23:33.713809 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:23:33 crc kubenswrapper[4869]: E0314 10:23:33.714625 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:23:41 crc kubenswrapper[4869]: I0314 10:23:41.703653 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:23:41 crc kubenswrapper[4869]: E0314 10:23:41.704350 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:23:44 crc kubenswrapper[4869]: I0314 10:23:44.703904 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:23:44 crc kubenswrapper[4869]: E0314 10:23:44.704720 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:23:48 crc kubenswrapper[4869]: I0314 10:23:48.705380 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:23:48 crc kubenswrapper[4869]: E0314 10:23:48.706476 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:23:53 crc kubenswrapper[4869]: I0314 10:23:53.705618 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:23:53 crc kubenswrapper[4869]: E0314 10:23:53.706563 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:23:59 crc kubenswrapper[4869]: I0314 10:23:59.703920 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.177691 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558064-phlv9"] Mar 14 10:24:00 crc kubenswrapper[4869]: E0314 10:24:00.178496 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27db24a4-d7ff-4b41-9358-805605459feb" containerName="oc" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.178561 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="27db24a4-d7ff-4b41-9358-805605459feb" containerName="oc" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.178863 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="27db24a4-d7ff-4b41-9358-805605459feb" containerName="oc" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.179709 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558064-phlv9" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.182700 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.182812 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.182920 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.191140 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558064-phlv9"] Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.263877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmhvf\" (UniqueName: \"kubernetes.io/projected/826b962e-912c-4dcc-b946-2da03717936a-kube-api-access-cmhvf\") pod \"auto-csr-approver-29558064-phlv9\" (UID: \"826b962e-912c-4dcc-b946-2da03717936a\") " pod="openshift-infra/auto-csr-approver-29558064-phlv9" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.365782 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmhvf\" (UniqueName: \"kubernetes.io/projected/826b962e-912c-4dcc-b946-2da03717936a-kube-api-access-cmhvf\") pod \"auto-csr-approver-29558064-phlv9\" (UID: \"826b962e-912c-4dcc-b946-2da03717936a\") " pod="openshift-infra/auto-csr-approver-29558064-phlv9" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.383275 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmhvf\" (UniqueName: \"kubernetes.io/projected/826b962e-912c-4dcc-b946-2da03717936a-kube-api-access-cmhvf\") pod \"auto-csr-approver-29558064-phlv9\" (UID: \"826b962e-912c-4dcc-b946-2da03717936a\") " 
pod="openshift-infra/auto-csr-approver-29558064-phlv9" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.512479 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558064-phlv9" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.704227 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:24:00 crc kubenswrapper[4869]: E0314 10:24:00.704767 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.980562 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558064-phlv9"] Mar 14 10:24:00 crc kubenswrapper[4869]: I0314 10:24:00.985924 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 14 10:24:01 crc kubenswrapper[4869]: I0314 10:24:01.136633 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558064-phlv9" event={"ID":"826b962e-912c-4dcc-b946-2da03717936a","Type":"ContainerStarted","Data":"757939a607db5fe135ccce7af9d1949e3c7307ba54215683c6c5475c75f71686"} Mar 14 10:24:01 crc kubenswrapper[4869]: I0314 10:24:01.139715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerStarted","Data":"6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b"} Mar 14 10:24:03 crc kubenswrapper[4869]: I0314 10:24:03.158061 4869 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-infra/auto-csr-approver-29558064-phlv9" event={"ID":"826b962e-912c-4dcc-b946-2da03717936a","Type":"ContainerStarted","Data":"18eeed1e378422ec1db3a85ec4e50e63b484a98cb380bc3ec680e8f62bef31ac"} Mar 14 10:24:03 crc kubenswrapper[4869]: I0314 10:24:03.184994 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29558064-phlv9" podStartSLOduration=2.211291082 podStartE2EDuration="3.184975919s" podCreationTimestamp="2026-03-14 10:24:00 +0000 UTC" firstStartedPulling="2026-03-14 10:24:00.983980406 +0000 UTC m=+5193.956262469" lastFinishedPulling="2026-03-14 10:24:01.957665263 +0000 UTC m=+5194.929947306" observedRunningTime="2026-03-14 10:24:03.175910716 +0000 UTC m=+5196.148192809" watchObservedRunningTime="2026-03-14 10:24:03.184975919 +0000 UTC m=+5196.157258002" Mar 14 10:24:04 crc kubenswrapper[4869]: I0314 10:24:04.168706 4869 generic.go:334] "Generic (PLEG): container finished" podID="826b962e-912c-4dcc-b946-2da03717936a" containerID="18eeed1e378422ec1db3a85ec4e50e63b484a98cb380bc3ec680e8f62bef31ac" exitCode=0 Mar 14 10:24:04 crc kubenswrapper[4869]: I0314 10:24:04.168766 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558064-phlv9" event={"ID":"826b962e-912c-4dcc-b946-2da03717936a","Type":"ContainerDied","Data":"18eeed1e378422ec1db3a85ec4e50e63b484a98cb380bc3ec680e8f62bef31ac"} Mar 14 10:24:04 crc kubenswrapper[4869]: I0314 10:24:04.539085 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:24:04 crc kubenswrapper[4869]: I0314 10:24:04.539152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:24:04 crc kubenswrapper[4869]: I0314 10:24:04.706270 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:24:04 crc 
kubenswrapper[4869]: E0314 10:24:04.706643 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:24:05 crc kubenswrapper[4869]: I0314 10:24:05.606691 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558064-phlv9" Mar 14 10:24:05 crc kubenswrapper[4869]: I0314 10:24:05.692567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmhvf\" (UniqueName: \"kubernetes.io/projected/826b962e-912c-4dcc-b946-2da03717936a-kube-api-access-cmhvf\") pod \"826b962e-912c-4dcc-b946-2da03717936a\" (UID: \"826b962e-912c-4dcc-b946-2da03717936a\") " Mar 14 10:24:05 crc kubenswrapper[4869]: I0314 10:24:05.712959 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826b962e-912c-4dcc-b946-2da03717936a-kube-api-access-cmhvf" (OuterVolumeSpecName: "kube-api-access-cmhvf") pod "826b962e-912c-4dcc-b946-2da03717936a" (UID: "826b962e-912c-4dcc-b946-2da03717936a"). InnerVolumeSpecName "kube-api-access-cmhvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:24:05 crc kubenswrapper[4869]: I0314 10:24:05.795072 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmhvf\" (UniqueName: \"kubernetes.io/projected/826b962e-912c-4dcc-b946-2da03717936a-kube-api-access-cmhvf\") on node \"crc\" DevicePath \"\"" Mar 14 10:24:06 crc kubenswrapper[4869]: I0314 10:24:06.189458 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558064-phlv9" event={"ID":"826b962e-912c-4dcc-b946-2da03717936a","Type":"ContainerDied","Data":"757939a607db5fe135ccce7af9d1949e3c7307ba54215683c6c5475c75f71686"} Mar 14 10:24:06 crc kubenswrapper[4869]: I0314 10:24:06.189495 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="757939a607db5fe135ccce7af9d1949e3c7307ba54215683c6c5475c75f71686" Mar 14 10:24:06 crc kubenswrapper[4869]: I0314 10:24:06.189535 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558064-phlv9" Mar 14 10:24:06 crc kubenswrapper[4869]: I0314 10:24:06.250821 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558058-t44z6"] Mar 14 10:24:06 crc kubenswrapper[4869]: I0314 10:24:06.258908 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558058-t44z6"] Mar 14 10:24:07 crc kubenswrapper[4869]: I0314 10:24:07.718264 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1" path="/var/lib/kubelet/pods/aa9a0b18-c7a4-4912-a2dc-c6075c2fd2c1/volumes" Mar 14 10:24:08 crc kubenswrapper[4869]: I0314 10:24:08.219982 4869 generic.go:334] "Generic (PLEG): container finished" podID="90750956-6a92-4c2c-8213-07cd62712ba1" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" exitCode=1 Mar 14 10:24:08 crc kubenswrapper[4869]: I0314 10:24:08.220066 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d48d6888-26pm7" event={"ID":"90750956-6a92-4c2c-8213-07cd62712ba1","Type":"ContainerDied","Data":"6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b"} Mar 14 10:24:08 crc kubenswrapper[4869]: I0314 10:24:08.220413 4869 scope.go:117] "RemoveContainer" containerID="2175a495b9a7ff7ac15861c66d13f21452fe257e33466ab02e2867fe4e38b3d7" Mar 14 10:24:08 crc kubenswrapper[4869]: I0314 10:24:08.221942 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:24:08 crc kubenswrapper[4869]: E0314 10:24:08.222337 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:24:08 crc kubenswrapper[4869]: I0314 10:24:08.223484 4869 generic.go:334] "Generic (PLEG): container finished" podID="a6c47849-4852-4379-8f28-97955656e693" containerID="a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69" exitCode=0 Mar 14 10:24:08 crc kubenswrapper[4869]: I0314 10:24:08.223544 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" event={"ID":"a6c47849-4852-4379-8f28-97955656e693","Type":"ContainerDied","Data":"a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69"} Mar 14 10:24:08 crc kubenswrapper[4869]: I0314 10:24:08.223918 4869 scope.go:117] "RemoveContainer" containerID="a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69" Mar 14 10:24:08 crc kubenswrapper[4869]: I0314 10:24:08.669295 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ck6w6_must-gather-q6h4q_a6c47849-4852-4379-8f28-97955656e693/gather/0.log" Mar 14 10:24:09 
crc kubenswrapper[4869]: I0314 10:24:09.320301 4869 scope.go:117] "RemoveContainer" containerID="edac00d5f6752397c0da3a18d8a7e5c23cdbae66fb70d1e951537bef6cb524ad" Mar 14 10:24:14 crc kubenswrapper[4869]: I0314 10:24:14.538725 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:24:14 crc kubenswrapper[4869]: I0314 10:24:14.539382 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d48d6888-26pm7" Mar 14 10:24:14 crc kubenswrapper[4869]: I0314 10:24:14.540277 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:24:14 crc kubenswrapper[4869]: E0314 10:24:14.540638 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:24:15 crc kubenswrapper[4869]: I0314 10:24:15.706336 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:24:15 crc kubenswrapper[4869]: E0314 10:24:15.706853 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:24:16 crc kubenswrapper[4869]: I0314 10:24:16.704382 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:24:16 crc 
kubenswrapper[4869]: E0314 10:24:16.705180 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:24:17 crc kubenswrapper[4869]: I0314 10:24:17.552216 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ck6w6/must-gather-q6h4q"] Mar 14 10:24:17 crc kubenswrapper[4869]: I0314 10:24:17.553218 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" podUID="a6c47849-4852-4379-8f28-97955656e693" containerName="copy" containerID="cri-o://94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd" gracePeriod=2 Mar 14 10:24:17 crc kubenswrapper[4869]: I0314 10:24:17.586604 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ck6w6/must-gather-q6h4q"] Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.156284 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ck6w6_must-gather-q6h4q_a6c47849-4852-4379-8f28-97955656e693/copy/0.log" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.157035 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.344691 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ck6w6_must-gather-q6h4q_a6c47849-4852-4379-8f28-97955656e693/copy/0.log" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.345328 4869 generic.go:334] "Generic (PLEG): container finished" podID="a6c47849-4852-4379-8f28-97955656e693" containerID="94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd" exitCode=143 Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.345382 4869 scope.go:117] "RemoveContainer" containerID="94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.345551 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ck6w6/must-gather-q6h4q" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.352122 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g46b\" (UniqueName: \"kubernetes.io/projected/a6c47849-4852-4379-8f28-97955656e693-kube-api-access-9g46b\") pod \"a6c47849-4852-4379-8f28-97955656e693\" (UID: \"a6c47849-4852-4379-8f28-97955656e693\") " Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.352380 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6c47849-4852-4379-8f28-97955656e693-must-gather-output\") pod \"a6c47849-4852-4379-8f28-97955656e693\" (UID: \"a6c47849-4852-4379-8f28-97955656e693\") " Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.364734 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c47849-4852-4379-8f28-97955656e693-kube-api-access-9g46b" (OuterVolumeSpecName: "kube-api-access-9g46b") pod "a6c47849-4852-4379-8f28-97955656e693" (UID: 
"a6c47849-4852-4379-8f28-97955656e693"). InnerVolumeSpecName "kube-api-access-9g46b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.369013 4869 scope.go:117] "RemoveContainer" containerID="a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.454407 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g46b\" (UniqueName: \"kubernetes.io/projected/a6c47849-4852-4379-8f28-97955656e693-kube-api-access-9g46b\") on node \"crc\" DevicePath \"\"" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.513800 4869 scope.go:117] "RemoveContainer" containerID="94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.515024 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c47849-4852-4379-8f28-97955656e693-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a6c47849-4852-4379-8f28-97955656e693" (UID: "a6c47849-4852-4379-8f28-97955656e693"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 14 10:24:18 crc kubenswrapper[4869]: E0314 10:24:18.517038 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd\": container with ID starting with 94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd not found: ID does not exist" containerID="94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.517092 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd"} err="failed to get container status \"94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd\": rpc error: code = NotFound desc = could not find container \"94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd\": container with ID starting with 94e4d97a4493637bb6c1f0e3275bb84a67a6fbb70db9cda722711439b728bddd not found: ID does not exist" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.517126 4869 scope.go:117] "RemoveContainer" containerID="a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69" Mar 14 10:24:18 crc kubenswrapper[4869]: E0314 10:24:18.517919 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69\": container with ID starting with a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69 not found: ID does not exist" containerID="a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.518023 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69"} 
err="failed to get container status \"a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69\": rpc error: code = NotFound desc = could not find container \"a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69\": container with ID starting with a854aa49d0b1ac96dab1d05ad852086777229551547eab639aafbf4c6b0bea69 not found: ID does not exist" Mar 14 10:24:18 crc kubenswrapper[4869]: I0314 10:24:18.556390 4869 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a6c47849-4852-4379-8f28-97955656e693-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 14 10:24:19 crc kubenswrapper[4869]: I0314 10:24:19.725061 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c47849-4852-4379-8f28-97955656e693" path="/var/lib/kubelet/pods/a6c47849-4852-4379-8f28-97955656e693/volumes" Mar 14 10:24:26 crc kubenswrapper[4869]: I0314 10:24:26.704746 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:24:26 crc kubenswrapper[4869]: E0314 10:24:26.707475 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:24:29 crc kubenswrapper[4869]: I0314 10:24:29.703730 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:24:29 crc kubenswrapper[4869]: E0314 10:24:29.704569 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:24:30 crc kubenswrapper[4869]: I0314 10:24:30.704452 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:24:31 crc kubenswrapper[4869]: I0314 10:24:31.494921 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerStarted","Data":"1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5"} Mar 14 10:24:34 crc kubenswrapper[4869]: I0314 10:24:34.404677 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:24:34 crc kubenswrapper[4869]: I0314 10:24:34.406862 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:24:38 crc kubenswrapper[4869]: I0314 10:24:38.705214 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:24:38 crc kubenswrapper[4869]: E0314 10:24:38.706483 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:24:39 crc kubenswrapper[4869]: I0314 10:24:39.595363 4869 generic.go:334] "Generic (PLEG): container finished" podID="c776b1be-07b2-4de0-808f-48c9a550aaa4" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5" exitCode=1 Mar 14 10:24:39 crc kubenswrapper[4869]: I0314 10:24:39.595576 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b646449c6-8g8ql" event={"ID":"c776b1be-07b2-4de0-808f-48c9a550aaa4","Type":"ContainerDied","Data":"1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5"} Mar 14 10:24:39 crc kubenswrapper[4869]: I0314 10:24:39.595716 4869 scope.go:117] "RemoveContainer" containerID="ac590fd783ef4fd4e2b9529eaca5bdf9ab99265cce01efa54ffcb5917a8696da" Mar 14 10:24:39 crc kubenswrapper[4869]: I0314 10:24:39.596701 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5" Mar 14 10:24:39 crc kubenswrapper[4869]: E0314 10:24:39.597460 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:24:40 crc kubenswrapper[4869]: I0314 10:24:40.704816 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:24:40 crc kubenswrapper[4869]: E0314 10:24:40.705918 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:24:44 crc kubenswrapper[4869]: I0314 10:24:44.404399 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:24:44 crc kubenswrapper[4869]: I0314 10:24:44.404927 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-6b646449c6-8g8ql" Mar 14 10:24:44 crc kubenswrapper[4869]: I0314 10:24:44.405839 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5" Mar 14 10:24:44 crc kubenswrapper[4869]: E0314 10:24:44.406098 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:24:51 crc kubenswrapper[4869]: I0314 10:24:51.705618 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:24:51 crc kubenswrapper[4869]: E0314 10:24:51.706476 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:24:54 crc kubenswrapper[4869]: I0314 10:24:54.704479 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:24:54 crc kubenswrapper[4869]: E0314 10:24:54.705296 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:24:57 crc kubenswrapper[4869]: I0314 10:24:57.716307 4869 scope.go:117] "RemoveContainer" 
containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5" Mar 14 10:24:57 crc kubenswrapper[4869]: E0314 10:24:57.717363 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:25:03 crc kubenswrapper[4869]: I0314 10:25:03.704585 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:25:03 crc kubenswrapper[4869]: E0314 10:25:03.705575 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:25:08 crc kubenswrapper[4869]: I0314 10:25:08.704627 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:25:08 crc kubenswrapper[4869]: E0314 10:25:08.705866 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:25:09 crc kubenswrapper[4869]: I0314 10:25:09.399986 4869 scope.go:117] "RemoveContainer" containerID="aac200f64dd5079663d660cb446d48fd8f2e90a7139d8424bd520f00495c78c8" Mar 14 10:25:09 crc kubenswrapper[4869]: I0314 10:25:09.704780 4869 
scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5" Mar 14 10:25:09 crc kubenswrapper[4869]: E0314 10:25:09.705853 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:25:14 crc kubenswrapper[4869]: I0314 10:25:14.704370 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:25:14 crc kubenswrapper[4869]: E0314 10:25:14.705307 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:25:21 crc kubenswrapper[4869]: I0314 10:25:21.704177 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5" Mar 14 10:25:21 crc kubenswrapper[4869]: E0314 10:25:21.705268 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:25:22 crc kubenswrapper[4869]: I0314 10:25:22.705185 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:25:22 crc kubenswrapper[4869]: E0314 10:25:22.706039 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:25:29 crc kubenswrapper[4869]: I0314 10:25:29.708143 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:25:29 crc kubenswrapper[4869]: E0314 10:25:29.713684 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:25:33 crc kubenswrapper[4869]: I0314 10:25:33.705098 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:25:33 crc kubenswrapper[4869]: E0314 10:25:33.706382 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:25:35 crc kubenswrapper[4869]: I0314 10:25:35.704720 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5" Mar 14 10:25:35 crc kubenswrapper[4869]: E0314 10:25:35.705361 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:25:44 crc kubenswrapper[4869]: I0314 10:25:44.704777 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:25:44 crc kubenswrapper[4869]: E0314 10:25:44.706058 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:25:46 crc kubenswrapper[4869]: I0314 10:25:46.704910 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:25:46 crc kubenswrapper[4869]: E0314 10:25:46.705295 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:25:46 crc kubenswrapper[4869]: I0314 10:25:46.705928 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5" Mar 14 10:25:46 crc kubenswrapper[4869]: E0314 10:25:46.706131 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" 
pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:25:56 crc kubenswrapper[4869]: I0314 10:25:56.705058 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b" Mar 14 10:25:56 crc kubenswrapper[4869]: E0314 10:25:56.706472 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1" Mar 14 10:25:59 crc kubenswrapper[4869]: I0314 10:25:59.704377 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5" Mar 14 10:25:59 crc kubenswrapper[4869]: E0314 10:25:59.704903 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.147026 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29558066-8cwh9"] Mar 14 10:26:00 crc kubenswrapper[4869]: E0314 10:26:00.147492 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c47849-4852-4379-8f28-97955656e693" containerName="copy" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.148528 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c47849-4852-4379-8f28-97955656e693" containerName="copy" Mar 14 10:26:00 crc kubenswrapper[4869]: E0314 10:26:00.148586 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c47849-4852-4379-8f28-97955656e693" 
containerName="gather" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.148595 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c47849-4852-4379-8f28-97955656e693" containerName="gather" Mar 14 10:26:00 crc kubenswrapper[4869]: E0314 10:26:00.148617 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826b962e-912c-4dcc-b946-2da03717936a" containerName="oc" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.148627 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b962e-912c-4dcc-b946-2da03717936a" containerName="oc" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.148894 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c47849-4852-4379-8f28-97955656e693" containerName="gather" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.148931 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c47849-4852-4379-8f28-97955656e693" containerName="copy" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.148944 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="826b962e-912c-4dcc-b946-2da03717936a" containerName="oc" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.149741 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29558066-8cwh9" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.151995 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.152051 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p2lfc" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.153382 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.165490 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558066-8cwh9"] Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.238334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dwg7\" (UniqueName: \"kubernetes.io/projected/d4c4d2a6-d647-4e37-8881-12fa21b8f75f-kube-api-access-7dwg7\") pod \"auto-csr-approver-29558066-8cwh9\" (UID: \"d4c4d2a6-d647-4e37-8881-12fa21b8f75f\") " pod="openshift-infra/auto-csr-approver-29558066-8cwh9" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.340433 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dwg7\" (UniqueName: \"kubernetes.io/projected/d4c4d2a6-d647-4e37-8881-12fa21b8f75f-kube-api-access-7dwg7\") pod \"auto-csr-approver-29558066-8cwh9\" (UID: \"d4c4d2a6-d647-4e37-8881-12fa21b8f75f\") " pod="openshift-infra/auto-csr-approver-29558066-8cwh9" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.357024 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dwg7\" (UniqueName: \"kubernetes.io/projected/d4c4d2a6-d647-4e37-8881-12fa21b8f75f-kube-api-access-7dwg7\") pod \"auto-csr-approver-29558066-8cwh9\" (UID: \"d4c4d2a6-d647-4e37-8881-12fa21b8f75f\") " 
pod="openshift-infra/auto-csr-approver-29558066-8cwh9" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.468455 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558066-8cwh9" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.706020 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6" Mar 14 10:26:00 crc kubenswrapper[4869]: E0314 10:26:00.706835 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f" Mar 14 10:26:00 crc kubenswrapper[4869]: I0314 10:26:00.978658 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29558066-8cwh9"] Mar 14 10:26:00 crc kubenswrapper[4869]: W0314 10:26:00.989684 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4c4d2a6_d647_4e37_8881_12fa21b8f75f.slice/crio-0e5a07c21d0d4d13a85a32b7c40afca2c5bc7303c015558b52a91bea615ae6a2 WatchSource:0}: Error finding container 0e5a07c21d0d4d13a85a32b7c40afca2c5bc7303c015558b52a91bea615ae6a2: Status 404 returned error can't find the container with id 0e5a07c21d0d4d13a85a32b7c40afca2c5bc7303c015558b52a91bea615ae6a2 Mar 14 10:26:01 crc kubenswrapper[4869]: I0314 10:26:01.539319 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558066-8cwh9" event={"ID":"d4c4d2a6-d647-4e37-8881-12fa21b8f75f","Type":"ContainerStarted","Data":"0e5a07c21d0d4d13a85a32b7c40afca2c5bc7303c015558b52a91bea615ae6a2"} Mar 14 10:26:03 crc 
kubenswrapper[4869]: I0314 10:26:03.579841 4869 generic.go:334] "Generic (PLEG): container finished" podID="d4c4d2a6-d647-4e37-8881-12fa21b8f75f" containerID="3cfe7a00a983b1529df3ac05a7428646a8e5408d5f0df1ed57430d200765da25" exitCode=0
Mar 14 10:26:03 crc kubenswrapper[4869]: I0314 10:26:03.579917 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558066-8cwh9" event={"ID":"d4c4d2a6-d647-4e37-8881-12fa21b8f75f","Type":"ContainerDied","Data":"3cfe7a00a983b1529df3ac05a7428646a8e5408d5f0df1ed57430d200765da25"}
Mar 14 10:26:05 crc kubenswrapper[4869]: I0314 10:26:05.407748 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558066-8cwh9"
Mar 14 10:26:05 crc kubenswrapper[4869]: I0314 10:26:05.443920 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dwg7\" (UniqueName: \"kubernetes.io/projected/d4c4d2a6-d647-4e37-8881-12fa21b8f75f-kube-api-access-7dwg7\") pod \"d4c4d2a6-d647-4e37-8881-12fa21b8f75f\" (UID: \"d4c4d2a6-d647-4e37-8881-12fa21b8f75f\") "
Mar 14 10:26:05 crc kubenswrapper[4869]: I0314 10:26:05.453687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c4d2a6-d647-4e37-8881-12fa21b8f75f-kube-api-access-7dwg7" (OuterVolumeSpecName: "kube-api-access-7dwg7") pod "d4c4d2a6-d647-4e37-8881-12fa21b8f75f" (UID: "d4c4d2a6-d647-4e37-8881-12fa21b8f75f"). InnerVolumeSpecName "kube-api-access-7dwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 10:26:05 crc kubenswrapper[4869]: I0314 10:26:05.546196 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dwg7\" (UniqueName: \"kubernetes.io/projected/d4c4d2a6-d647-4e37-8881-12fa21b8f75f-kube-api-access-7dwg7\") on node \"crc\" DevicePath \"\""
Mar 14 10:26:05 crc kubenswrapper[4869]: I0314 10:26:05.602424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29558066-8cwh9" event={"ID":"d4c4d2a6-d647-4e37-8881-12fa21b8f75f","Type":"ContainerDied","Data":"0e5a07c21d0d4d13a85a32b7c40afca2c5bc7303c015558b52a91bea615ae6a2"}
Mar 14 10:26:05 crc kubenswrapper[4869]: I0314 10:26:05.602463 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e5a07c21d0d4d13a85a32b7c40afca2c5bc7303c015558b52a91bea615ae6a2"
Mar 14 10:26:05 crc kubenswrapper[4869]: I0314 10:26:05.602478 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29558066-8cwh9"
Mar 14 10:26:06 crc kubenswrapper[4869]: I0314 10:26:06.518450 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29558060-hm855"]
Mar 14 10:26:06 crc kubenswrapper[4869]: I0314 10:26:06.532940 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29558060-hm855"]
Mar 14 10:26:07 crc kubenswrapper[4869]: I0314 10:26:07.716995 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1442f731-fd67-4328-a0e3-da23c33e97d7" path="/var/lib/kubelet/pods/1442f731-fd67-4328-a0e3-da23c33e97d7/volumes"
Mar 14 10:26:09 crc kubenswrapper[4869]: I0314 10:26:09.515809 4869 scope.go:117] "RemoveContainer" containerID="a3dbc60fd5dfc05d7b857cd76795539e69cd81a7354daf05c8803727bef115f7"
Mar 14 10:26:09 crc kubenswrapper[4869]: I0314 10:26:09.704444 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b"
Mar 14 10:26:09 crc kubenswrapper[4869]: E0314 10:26:09.704713 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:26:10 crc kubenswrapper[4869]: I0314 10:26:10.704426 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5"
Mar 14 10:26:10 crc kubenswrapper[4869]: E0314 10:26:10.705287 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:26:15 crc kubenswrapper[4869]: I0314 10:26:15.705518 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"
Mar 14 10:26:15 crc kubenswrapper[4869]: E0314 10:26:15.706209 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:26:21 crc kubenswrapper[4869]: I0314 10:26:21.704660 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5"
Mar 14 10:26:21 crc kubenswrapper[4869]: E0314 10:26:21.705734 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:26:22 crc kubenswrapper[4869]: I0314 10:26:22.703223 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b"
Mar 14 10:26:22 crc kubenswrapper[4869]: E0314 10:26:22.703702 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:26:27 crc kubenswrapper[4869]: I0314 10:26:27.713422 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"
Mar 14 10:26:27 crc kubenswrapper[4869]: E0314 10:26:27.715161 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:26:34 crc kubenswrapper[4869]: I0314 10:26:34.705089 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5"
Mar 14 10:26:34 crc kubenswrapper[4869]: I0314 10:26:34.705768 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b"
Mar 14 10:26:34 crc kubenswrapper[4869]: E0314 10:26:34.705974 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:26:34 crc kubenswrapper[4869]: E0314 10:26:34.706266 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:26:41 crc kubenswrapper[4869]: I0314 10:26:41.705753 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"
Mar 14 10:26:41 crc kubenswrapper[4869]: E0314 10:26:41.708891 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.376930 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ww54h"]
Mar 14 10:26:46 crc kubenswrapper[4869]: E0314 10:26:46.378169 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c4d2a6-d647-4e37-8881-12fa21b8f75f" containerName="oc"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.378183 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c4d2a6-d647-4e37-8881-12fa21b8f75f" containerName="oc"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.378379 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c4d2a6-d647-4e37-8881-12fa21b8f75f" containerName="oc"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.379790 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.388984 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ww54h"]
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.552228 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzx7\" (UniqueName: \"kubernetes.io/projected/5334a673-f193-4db3-bbcb-d257272d82f5-kube-api-access-7gzx7\") pod \"certified-operators-ww54h\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") " pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.552463 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-catalog-content\") pod \"certified-operators-ww54h\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") " pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.552664 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-utilities\") pod \"certified-operators-ww54h\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") " pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.654169 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-catalog-content\") pod \"certified-operators-ww54h\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") " pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.654247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-utilities\") pod \"certified-operators-ww54h\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") " pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.654353 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzx7\" (UniqueName: \"kubernetes.io/projected/5334a673-f193-4db3-bbcb-d257272d82f5-kube-api-access-7gzx7\") pod \"certified-operators-ww54h\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") " pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.654937 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-catalog-content\") pod \"certified-operators-ww54h\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") " pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.654992 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-utilities\") pod \"certified-operators-ww54h\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") " pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.683907 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzx7\" (UniqueName: \"kubernetes.io/projected/5334a673-f193-4db3-bbcb-d257272d82f5-kube-api-access-7gzx7\") pod \"certified-operators-ww54h\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") " pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:46 crc kubenswrapper[4869]: I0314 10:26:46.703896 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:47 crc kubenswrapper[4869]: I0314 10:26:47.260488 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ww54h"]
Mar 14 10:26:47 crc kubenswrapper[4869]: I0314 10:26:47.710140 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b"
Mar 14 10:26:47 crc kubenswrapper[4869]: E0314 10:26:47.710672 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:26:48 crc kubenswrapper[4869]: I0314 10:26:48.039767 4869 generic.go:334] "Generic (PLEG): container finished" podID="5334a673-f193-4db3-bbcb-d257272d82f5" containerID="1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6" exitCode=0
Mar 14 10:26:48 crc kubenswrapper[4869]: I0314 10:26:48.039871 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ww54h" event={"ID":"5334a673-f193-4db3-bbcb-d257272d82f5","Type":"ContainerDied","Data":"1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6"}
Mar 14 10:26:48 crc kubenswrapper[4869]: I0314 10:26:48.039947 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ww54h" event={"ID":"5334a673-f193-4db3-bbcb-d257272d82f5","Type":"ContainerStarted","Data":"e5b7d49774361607444cc8ef0d742fbd4ac43378dab019762a735b83a60e13d5"}
Mar 14 10:26:48 crc kubenswrapper[4869]: I0314 10:26:48.704488 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5"
Mar 14 10:26:48 crc kubenswrapper[4869]: E0314 10:26:48.705208 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:26:49 crc kubenswrapper[4869]: I0314 10:26:49.054337 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ww54h" event={"ID":"5334a673-f193-4db3-bbcb-d257272d82f5","Type":"ContainerStarted","Data":"8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc"}
Mar 14 10:26:50 crc kubenswrapper[4869]: I0314 10:26:50.063704 4869 generic.go:334] "Generic (PLEG): container finished" podID="5334a673-f193-4db3-bbcb-d257272d82f5" containerID="8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc" exitCode=0
Mar 14 10:26:50 crc kubenswrapper[4869]: I0314 10:26:50.063784 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ww54h" event={"ID":"5334a673-f193-4db3-bbcb-d257272d82f5","Type":"ContainerDied","Data":"8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc"}
Mar 14 10:26:51 crc kubenswrapper[4869]: I0314 10:26:51.074847 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ww54h" event={"ID":"5334a673-f193-4db3-bbcb-d257272d82f5","Type":"ContainerStarted","Data":"31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44"}
Mar 14 10:26:53 crc kubenswrapper[4869]: I0314 10:26:53.705083 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"
Mar 14 10:26:53 crc kubenswrapper[4869]: E0314 10:26:53.706189 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:26:56 crc kubenswrapper[4869]: I0314 10:26:56.705294 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:56 crc kubenswrapper[4869]: I0314 10:26:56.706820 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:56 crc kubenswrapper[4869]: I0314 10:26:56.799174 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:56 crc kubenswrapper[4869]: I0314 10:26:56.820855 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ww54h" podStartSLOduration=8.400835289 podStartE2EDuration="10.820823352s" podCreationTimestamp="2026-03-14 10:26:46 +0000 UTC" firstStartedPulling="2026-03-14 10:26:48.042653214 +0000 UTC m=+5361.014935317" lastFinishedPulling="2026-03-14 10:26:50.462641287 +0000 UTC m=+5363.434923380" observedRunningTime="2026-03-14 10:26:51.10363476 +0000 UTC m=+5364.075916873" watchObservedRunningTime="2026-03-14 10:26:56.820823352 +0000 UTC m=+5369.793105415"
Mar 14 10:26:57 crc kubenswrapper[4869]: I0314 10:26:57.212254 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:57 crc kubenswrapper[4869]: I0314 10:26:57.272206 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ww54h"]
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.154558 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ww54h" podUID="5334a673-f193-4db3-bbcb-d257272d82f5" containerName="registry-server" containerID="cri-o://31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44" gracePeriod=2
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.772436 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.836804 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-utilities\") pod \"5334a673-f193-4db3-bbcb-d257272d82f5\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") "
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.836930 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-catalog-content\") pod \"5334a673-f193-4db3-bbcb-d257272d82f5\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") "
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.837119 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gzx7\" (UniqueName: \"kubernetes.io/projected/5334a673-f193-4db3-bbcb-d257272d82f5-kube-api-access-7gzx7\") pod \"5334a673-f193-4db3-bbcb-d257272d82f5\" (UID: \"5334a673-f193-4db3-bbcb-d257272d82f5\") "
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.838906 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-utilities" (OuterVolumeSpecName: "utilities") pod "5334a673-f193-4db3-bbcb-d257272d82f5" (UID: "5334a673-f193-4db3-bbcb-d257272d82f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.856366 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5334a673-f193-4db3-bbcb-d257272d82f5-kube-api-access-7gzx7" (OuterVolumeSpecName: "kube-api-access-7gzx7") pod "5334a673-f193-4db3-bbcb-d257272d82f5" (UID: "5334a673-f193-4db3-bbcb-d257272d82f5"). InnerVolumeSpecName "kube-api-access-7gzx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.932844 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5334a673-f193-4db3-bbcb-d257272d82f5" (UID: "5334a673-f193-4db3-bbcb-d257272d82f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.939218 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gzx7\" (UniqueName: \"kubernetes.io/projected/5334a673-f193-4db3-bbcb-d257272d82f5-kube-api-access-7gzx7\") on node \"crc\" DevicePath \"\""
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.939247 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-utilities\") on node \"crc\" DevicePath \"\""
Mar 14 10:26:59 crc kubenswrapper[4869]: I0314 10:26:59.939256 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5334a673-f193-4db3-bbcb-d257272d82f5-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.165906 4869 generic.go:334] "Generic (PLEG): container finished" podID="5334a673-f193-4db3-bbcb-d257272d82f5" containerID="31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44" exitCode=0
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.165966 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ww54h" event={"ID":"5334a673-f193-4db3-bbcb-d257272d82f5","Type":"ContainerDied","Data":"31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44"}
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.166009 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ww54h" event={"ID":"5334a673-f193-4db3-bbcb-d257272d82f5","Type":"ContainerDied","Data":"e5b7d49774361607444cc8ef0d742fbd4ac43378dab019762a735b83a60e13d5"}
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.166005 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ww54h"
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.166033 4869 scope.go:117] "RemoveContainer" containerID="31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44"
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.219445 4869 scope.go:117] "RemoveContainer" containerID="8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc"
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.220785 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ww54h"]
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.230337 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ww54h"]
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.257542 4869 scope.go:117] "RemoveContainer" containerID="1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6"
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.309394 4869 scope.go:117] "RemoveContainer" containerID="31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44"
Mar 14 10:27:00 crc kubenswrapper[4869]: E0314 10:27:00.309974 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44\": container with ID starting with 31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44 not found: ID does not exist" containerID="31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44"
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.310015 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44"} err="failed to get container status \"31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44\": rpc error: code = NotFound desc = could not find container \"31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44\": container with ID starting with 31003b227853ce031ea59753494c23b3247a70f4144b6aa614e2f7b6b4bb6b44 not found: ID does not exist"
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.310042 4869 scope.go:117] "RemoveContainer" containerID="8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc"
Mar 14 10:27:00 crc kubenswrapper[4869]: E0314 10:27:00.310329 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc\": container with ID starting with 8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc not found: ID does not exist" containerID="8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc"
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.310357 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc"} err="failed to get container status \"8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc\": rpc error: code = NotFound desc = could not find container \"8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc\": container with ID starting with 8b8f2ce6921d09fc095eaf4f6dac356e5ab2671a4bd4d1e18cd3f0ccf88e25fc not found: ID does not exist"
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.310372 4869 scope.go:117] "RemoveContainer" containerID="1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6"
Mar 14 10:27:00 crc kubenswrapper[4869]: E0314 10:27:00.310883 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6\": container with ID starting with 1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6 not found: ID does not exist" containerID="1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6"
Mar 14 10:27:00 crc kubenswrapper[4869]: I0314 10:27:00.311003 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6"} err="failed to get container status \"1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6\": rpc error: code = NotFound desc = could not find container \"1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6\": container with ID starting with 1e43aa4e4ac5cd8f228ef0c179d79d185df5c221faa60d601d4d5b7b0b27dea6 not found: ID does not exist"
Mar 14 10:27:01 crc kubenswrapper[4869]: I0314 10:27:01.717088 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5334a673-f193-4db3-bbcb-d257272d82f5" path="/var/lib/kubelet/pods/5334a673-f193-4db3-bbcb-d257272d82f5/volumes"
Mar 14 10:27:02 crc kubenswrapper[4869]: I0314 10:27:02.703996 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b"
Mar 14 10:27:02 crc kubenswrapper[4869]: I0314 10:27:02.704097 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5"
Mar 14 10:27:02 crc kubenswrapper[4869]: E0314 10:27:02.704387 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:27:02 crc kubenswrapper[4869]: E0314 10:27:02.704398 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:27:04 crc kubenswrapper[4869]: I0314 10:27:04.704155 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"
Mar 14 10:27:04 crc kubenswrapper[4869]: E0314 10:27:04.704794 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:27:15 crc kubenswrapper[4869]: I0314 10:27:15.705815 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5"
Mar 14 10:27:15 crc kubenswrapper[4869]: E0314 10:27:15.708213 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:27:16 crc kubenswrapper[4869]: I0314 10:27:16.704318 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b"
Mar 14 10:27:16 crc kubenswrapper[4869]: E0314 10:27:16.704602 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"
Mar 14 10:27:16 crc kubenswrapper[4869]: I0314 10:27:16.704620 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"
Mar 14 10:27:16 crc kubenswrapper[4869]: E0314 10:27:16.705076 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:27:20 crc kubenswrapper[4869]: I0314 10:27:20.996833 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="2b16088c-48ba-4c09-91b1-a0447bced81b" containerName="galera" probeResult="failure" output="command timed out"
Mar 14 10:27:20 crc kubenswrapper[4869]: I0314 10:27:20.996833 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="2b16088c-48ba-4c09-91b1-a0447bced81b" containerName="galera" probeResult="failure" output="command timed out"
Mar 14 10:27:29 crc kubenswrapper[4869]: I0314 10:27:29.703818 4869 scope.go:117] "RemoveContainer" containerID="9695f0aec3ee96198ef11242a93ca785003d67c1a2f268dcc3baee64dc2ab8e6"
Mar 14 10:27:29 crc kubenswrapper[4869]: I0314 10:27:29.706178 4869 scope.go:117] "RemoveContainer" containerID="1f30cb71e45520f647b8fb7d33843b9a6213a06ca01944ac0eba5a455e6617d5"
Mar 14 10:27:29 crc kubenswrapper[4869]: E0314 10:27:29.706737 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jj985_openshift-machine-config-operator(e08d1ace-1d27-4a7d-b08e-c245a103c56f)\"" pod="openshift-machine-config-operator/machine-config-daemon-jj985" podUID="e08d1ace-1d27-4a7d-b08e-c245a103c56f"
Mar 14 10:27:29 crc kubenswrapper[4869]: E0314 10:27:29.706763 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-6b646449c6-8g8ql_openstack(c776b1be-07b2-4de0-808f-48c9a550aaa4)\"" pod="openstack/horizon-6b646449c6-8g8ql" podUID="c776b1be-07b2-4de0-808f-48c9a550aaa4"
Mar 14 10:27:30 crc kubenswrapper[4869]: I0314 10:27:30.705920 4869 scope.go:117] "RemoveContainer" containerID="6e8b61349cebb7204cc049e48c899bbe1ba89ad23524d3d45194ef428d15b29b"
Mar 14 10:27:30 crc kubenswrapper[4869]: E0314 10:27:30.706437 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=horizon pod=horizon-9d48d6888-26pm7_openstack(90750956-6a92-4c2c-8213-07cd62712ba1)\"" pod="openstack/horizon-9d48d6888-26pm7" podUID="90750956-6a92-4c2c-8213-07cd62712ba1"